Jun 25 16:26:07.151760 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:26:07.151813 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:26:07.151836 kernel: BIOS-provided physical RAM map: Jun 25 16:26:07.151847 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:26:07.151856 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:26:07.151867 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:26:07.151879 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jun 25 16:26:07.151892 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jun 25 16:26:07.151903 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 16:26:07.151919 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:26:07.151930 kernel: NX (Execute Disable) protection: active Jun 25 16:26:07.151942 kernel: SMBIOS 2.8 present. Jun 25 16:26:07.151952 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 25 16:26:07.151964 kernel: Hypervisor detected: KVM Jun 25 16:26:07.151977 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:26:07.151994 kernel: kvm-clock: using sched offset of 6521190288 cycles Jun 25 16:26:07.152008 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:26:07.152021 kernel: tsc: Detected 2294.608 MHz processor Jun 25 16:26:07.152051 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:26:07.152065 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:26:07.152077 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jun 25 16:26:07.152089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:26:07.152101 kernel: ACPI: Early table checksum verification disabled Jun 25 16:26:07.152113 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jun 25 16:26:07.152129 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152142 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152154 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152167 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 25 16:26:07.152180 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152192 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152205 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152217 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:26:07.152236 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 25 16:26:07.152249 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 25 16:26:07.152261 kernel: 
ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 25 16:26:07.152273 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 25 16:26:07.152285 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 25 16:26:07.152298 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 25 16:26:07.152310 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 25 16:26:07.152323 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:26:07.152347 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:26:07.152375 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 16:26:07.152390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 25 16:26:07.152421 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jun 25 16:26:07.152435 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jun 25 16:26:07.152450 kernel: Zone ranges: Jun 25 16:26:07.152464 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:26:07.152483 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jun 25 16:26:07.152497 kernel: Normal empty Jun 25 16:26:07.152511 kernel: Movable zone start for each node Jun 25 16:26:07.152525 kernel: Early memory node ranges Jun 25 16:26:07.152539 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:26:07.152554 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jun 25 16:26:07.152568 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jun 25 16:26:07.152582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:26:07.152596 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:26:07.152616 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jun 25 16:26:07.152630 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 16:26:07.152644 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:26:07.152658 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:26:07.152672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 16:26:07.152689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:26:07.152704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:26:07.152718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:26:07.152732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:26:07.152751 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:26:07.152771 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:26:07.152784 kernel: TSC deadline timer available Jun 25 16:26:07.152797 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:26:07.152810 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 25 16:26:07.152823 kernel: Booting paravirtualized kernel on KVM Jun 25 16:26:07.152835 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:26:07.152847 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:26:07.152860 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:26:07.152879 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:26:07.152891 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:26:07.152903 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 25 
16:26:07.152915 kernel: Fallback order for Node 0: 0 Jun 25 16:26:07.152928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jun 25 16:26:07.152940 kernel: Policy zone: DMA32 Jun 25 16:26:07.152955 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:26:07.152969 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:26:07.152988 kernel: random: crng init done Jun 25 16:26:07.153002 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:26:07.153016 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:26:07.153029 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:26:07.153044 kernel: Memory: 1967112K/2096600K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129228K reserved, 0K cma-reserved) Jun 25 16:26:07.155849 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:26:07.155887 kernel: Kernel/User page tables isolation: enabled Jun 25 16:26:07.155903 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:26:07.155918 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:26:07.155950 kernel: Dynamic Preempt: voluntary Jun 25 16:26:07.155964 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:26:07.155979 kernel: rcu: RCU event tracing is enabled. Jun 25 16:26:07.155992 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:26:07.156007 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:26:07.156022 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:26:07.156037 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:26:07.156073 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:26:07.156089 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:26:07.156111 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:26:07.156128 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:26:07.156143 kernel: Console: colour VGA+ 80x25 Jun 25 16:26:07.156159 kernel: printk: console [tty0] enabled Jun 25 16:26:07.156172 kernel: printk: console [ttyS0] enabled Jun 25 16:26:07.156185 kernel: ACPI: Core revision 20220331 Jun 25 16:26:07.156199 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 16:26:07.156213 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:26:07.156227 kernel: x2apic enabled Jun 25 16:26:07.156244 kernel: Switched APIC routing to physical x2apic. Jun 25 16:26:07.156257 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:26:07.156271 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jun 25 16:26:07.156285 kernel: Calibrating delay loop (skipped) preset value.. 
4589.21 BogoMIPS (lpj=2294608) Jun 25 16:26:07.156302 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 25 16:26:07.156317 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 25 16:26:07.156333 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:26:07.156349 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:26:07.156366 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:26:07.156403 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:26:07.156416 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 25 16:26:07.156429 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:26:07.156445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:26:07.156460 kernel: MDS: Mitigation: Clear CPU buffers Jun 25 16:26:07.156475 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:26:07.156496 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:26:07.156518 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:26:07.156537 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:26:07.156560 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:26:07.156580 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 25 16:26:07.156617 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:26:07.156636 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:26:07.156656 kernel: LSM: Security Framework initializing Jun 25 16:26:07.156686 kernel: SELinux: Initializing. Jun 25 16:26:07.156701 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:26:07.156719 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:26:07.156731 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 25 16:26:07.156746 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:26:07.156765 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:26:07.156795 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:26:07.156833 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:26:07.156847 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:26:07.156860 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:26:07.156873 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jun 25 16:26:07.156887 kernel: signal: max sigframe size: 1776 Jun 25 16:26:07.156908 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:26:07.156923 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:26:07.156937 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:26:07.156954 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:26:07.156969 kernel: x86: Booting SMP configuration: Jun 25 16:26:07.156982 kernel: .... 
node #0, CPUs: #1 Jun 25 16:26:07.156995 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:26:07.157009 kernel: smpboot: Max logical packages: 1 Jun 25 16:26:07.157023 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jun 25 16:26:07.157066 kernel: devtmpfs: initialized Jun 25 16:26:07.157081 kernel: x86/mm: Memory block size: 128MB Jun 25 16:26:07.157096 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:26:07.157112 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:26:07.157126 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:26:07.157141 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:26:07.157157 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:26:07.157173 kernel: audit: type=2000 audit(1719332764.936:1): state=initialized audit_enabled=0 res=1 Jun 25 16:26:07.157191 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:26:07.157214 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:26:07.157229 kernel: cpuidle: using governor menu Jun 25 16:26:07.157244 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:26:07.157259 kernel: dca service started, version 1.12.1 Jun 25 16:26:07.157274 kernel: PCI: Using configuration type 1 for base access Jun 25 16:26:07.157289 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:26:07.157304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:26:07.157320 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:26:07.157338 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:26:07.157360 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:26:07.157376 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:26:07.157394 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:26:07.157408 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:26:07.157423 kernel: ACPI: Interpreter enabled Jun 25 16:26:07.157438 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:26:07.157453 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:26:07.157468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:26:07.157483 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:26:07.157503 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 16:26:07.157518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:26:07.158002 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:26:07.160450 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:26:07.160643 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jun 25 16:26:07.160670 kernel: acpiphp: Slot [3] registered Jun 25 16:26:07.160689 kernel: acpiphp: Slot [4] registered Jun 25 16:26:07.160726 kernel: acpiphp: Slot [5] registered Jun 25 16:26:07.160741 kernel: acpiphp: Slot [6] registered Jun 25 16:26:07.160757 kernel: acpiphp: Slot [7] registered Jun 25 16:26:07.160771 kernel: acpiphp: Slot [8] registered Jun 25 16:26:07.160785 kernel: acpiphp: Slot [9] registered Jun 25 16:26:07.160799 kernel: acpiphp: Slot [10] registered Jun 25 16:26:07.160813 kernel: acpiphp: Slot [11] registered Jun 25 16:26:07.160828 kernel: acpiphp: Slot [12] registered Jun 25 16:26:07.160843 kernel: acpiphp: Slot [13] registered Jun 25 16:26:07.160858 kernel: acpiphp: Slot [14] registered Jun 25 16:26:07.160880 kernel: acpiphp: Slot [15] registered Jun 25 16:26:07.160898 kernel: acpiphp: Slot [16] registered Jun 25 16:26:07.160915 kernel: acpiphp: Slot [17] registered Jun 25 16:26:07.160933 kernel: acpiphp: Slot [18] registered Jun 25 16:26:07.160950 kernel: acpiphp: Slot [19] registered Jun 25 16:26:07.160965 kernel: acpiphp: Slot [20] registered Jun 25 16:26:07.160980 kernel: acpiphp: Slot [21] registered Jun 25 16:26:07.160995 kernel: acpiphp: Slot [22] registered Jun 25 16:26:07.161010 kernel: acpiphp: Slot [23] registered Jun 25 16:26:07.161331 kernel: acpiphp: Slot [24] registered Jun 25 16:26:07.161363 kernel: acpiphp: Slot [25] registered Jun 25 16:26:07.161379 kernel: acpiphp: Slot [26] registered Jun 25 16:26:07.161396 kernel: acpiphp: Slot [27] registered Jun 25 16:26:07.161412 kernel: acpiphp: Slot [28] registered Jun 25 16:26:07.161429 kernel: acpiphp: Slot [29] registered Jun 25 16:26:07.161445 kernel: acpiphp: Slot [30] registered Jun 25 16:26:07.161461 kernel: acpiphp: Slot [31] registered Jun 25 16:26:07.161476 kernel: PCI host bridge to bus 0000:00 Jun 25 16:26:07.161729 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:26:07.161889 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:26:07.162287 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:26:07.162436 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 16:26:07.162600 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 16:26:07.162754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:26:07.162955 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:26:07.165289 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:26:07.165549 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 16:26:07.165752 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jun 25 16:26:07.165938 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:26:07.167265 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:26:07.167479 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:26:07.167612 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:26:07.167741 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jun 25 16:26:07.167856 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jun 25 16:26:07.167977 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:26:07.168148 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 16:26:07.168297 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 16:26:07.168458 
kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 25 16:26:07.168643 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 25 16:26:07.168786 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 25 16:26:07.168931 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jun 25 16:26:07.171200 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 16:26:07.171405 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:26:07.171633 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:26:07.171828 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jun 25 16:26:07.171994 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jun 25 16:26:07.173290 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 25 16:26:07.173500 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:26:07.173652 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jun 25 16:26:07.173800 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jun 25 16:26:07.173948 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 25 16:26:07.175320 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jun 25 16:26:07.175532 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jun 25 16:26:07.175688 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jun 25 16:26:07.175863 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 25 16:26:07.183328 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:26:07.183593 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 16:26:07.183749 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jun 25 16:26:07.183917 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 25 16:26:07.184166 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:26:07.184333 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jun 25 16:26:07.184490 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jun 25 16:26:07.184642 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jun 25 16:26:07.184808 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jun 25 16:26:07.184965 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jun 25 16:26:07.185158 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 25 16:26:07.185185 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:26:07.185199 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:26:07.185213 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:26:07.185225 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:26:07.185237 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:26:07.185251 kernel: iommu: Default domain type: Translated Jun 25 16:26:07.185263 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:26:07.185286 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:26:07.185299 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:26:07.185312 kernel: PTP clock support registered Jun 25 16:26:07.185326 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:26:07.185338 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:26:07.185351 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:26:07.185363 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jun 25 16:26:07.185562 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 16:26:07.185719 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 16:26:07.185895 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:26:07.185918 kernel: vgaarb: loaded Jun 25 16:26:07.185932 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 16:26:07.185946 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 16:26:07.185959 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:26:07.185974 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:26:07.186019 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:26:07.186074 kernel: pnp: PnP ACPI init Jun 25 16:26:07.186098 kernel: pnp: PnP ACPI: found 4 devices Jun 25 16:26:07.186133 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:26:07.186150 kernel: NET: Registered PF_INET protocol family Jun 25 16:26:07.186164 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:26:07.186178 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:26:07.186191 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:26:07.186205 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:26:07.186219 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:26:07.186232 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:26:07.186253 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:26:07.186267 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:26:07.186281 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:26:07.186295 kernel: NET: Registered PF_XDP protocol family Jun 25 16:26:07.186503 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:26:07.186652 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:26:07.186787 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:26:07.186915 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 16:26:07.187097 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 16:26:07.187280 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 16:26:07.187437 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:26:07.187461 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 16:26:07.187615 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 47230 usecs Jun 25 16:26:07.187638 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:26:07.187653 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:26:07.187667 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jun 25 16:26:07.187681 kernel: Initialise system trusted keyrings Jun 25 16:26:07.187706 kernel: 
workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:26:07.187721 kernel: Key type asymmetric registered Jun 25 16:26:07.187735 kernel: Asymmetric key parser 'x509' registered Jun 25 16:26:07.187749 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:26:07.187765 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:26:07.187778 kernel: io scheduler mq-deadline registered Jun 25 16:26:07.187791 kernel: io scheduler kyber registered Jun 25 16:26:07.187806 kernel: io scheduler bfq registered Jun 25 16:26:07.187821 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:26:07.187841 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 25 16:26:07.187855 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:26:07.187869 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:26:07.187883 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:26:07.187896 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:26:07.187910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:26:07.187923 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:26:07.187936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:26:07.187950 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:26:07.188272 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 25 16:26:07.188449 kernel: rtc_cmos 00:03: registered as rtc0 Jun 25 16:26:07.188592 kernel: rtc_cmos 00:03: setting system clock to 2024-06-25T16:26:06 UTC (1719332766) Jun 25 16:26:07.188725 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 25 16:26:07.188746 kernel: intel_pstate: CPU model not supported Jun 25 16:26:07.188761 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:26:07.188775 kernel: Segment Routing with IPv6 Jun 25 16:26:07.188789 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:26:07.188822 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:26:07.188843 kernel: Key type dns_resolver registered Jun 25 16:26:07.188862 kernel: IPI shorthand broadcast: enabled Jun 25 16:26:07.188881 kernel: sched_clock: Marking stable (1704689627, 253714005)->(2087337193, -128933561) Jun 25 16:26:07.188899 kernel: registered taskstats version 1 Jun 25 16:26:07.188919 kernel: Loading compiled-in X.509 certificates Jun 25 16:26:07.188938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:26:07.188956 kernel: Key type .fscrypt registered Jun 25 16:26:07.188975 kernel: Key type fscrypt-provisioning registered Jun 25 16:26:07.188998 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:26:07.189029 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:26:07.189043 kernel: ima: No architecture policies found Jun 25 16:26:07.189187 kernel: clk: Disabling unused clocks Jun 25 16:26:07.189233 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:26:07.189251 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:26:07.189265 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:26:07.189278 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:26:07.189296 kernel: Run /init as init process Jun 25 16:26:07.189311 kernel: with arguments: Jun 25 16:26:07.189326 kernel: /init Jun 25 16:26:07.189340 kernel: with environment: Jun 25 16:26:07.189353 kernel: HOME=/ Jun 25 16:26:07.189366 kernel: TERM=linux Jun 25 16:26:07.189379 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:26:07.189398 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:26:07.189421 systemd[1]: Detected virtualization kvm. Jun 25 16:26:07.189436 systemd[1]: Detected architecture x86-64. Jun 25 16:26:07.189450 systemd[1]: Running in initrd. Jun 25 16:26:07.189463 systemd[1]: No hostname configured, using default hostname. Jun 25 16:26:07.189476 systemd[1]: Hostname set to <localhost>. Jun 25 16:26:07.189491 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:26:07.189506 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:26:07.189521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:26:07.189542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:26:07.189556 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:26:07.189569 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:26:07.189582 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:26:07.189599 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:26:07.189613 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:26:07.189632 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:26:07.189646 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:26:07.189665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:26:07.189684 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:26:07.189698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:26:07.189712 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:26:07.189726 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:26:07.189743 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:26:07.189758 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:26:07.189773 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:26:07.189788 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:26:07.189802 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:26:07.189816 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:26:07.189831 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:26:07.189847 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:26:07.189864 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:26:07.189896 systemd-journald[179]: Journal started Jun 25 16:26:07.190074 systemd-journald[179]: Runtime Journal (/run/log/journal/3dd16c363a66405dabaa47fccdda6fe5) is 4.9M, max 39.3M, 34.4M free. Jun 25 16:26:07.163139 systemd-modules-load[180]: Inserted module 'overlay' Jun 25 16:26:07.214977 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:26:07.215027 kernel: audit: type=1130 audit(1719332767.206:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.216557 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:26:07.238154 kernel: audit: type=1130 audit(1719332767.215:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.238202 kernel: audit: type=1130 audit(1719332767.217:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.238222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:26:07.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.233700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:26:07.239889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:26:07.245210 systemd-modules-load[180]: Inserted module 'br_netfilter' Jun 25 16:26:07.246244 kernel: Bridge firewalling registered Jun 25 16:26:07.256621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:26:07.272857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:26:07.284622 kernel: audit: type=1130 audit(1719332767.273:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:07.277202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:26:07.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.316168 kernel: audit: type=1130 audit(1719332767.290:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.316261 kernel: SCSI subsystem initialized Jun 25 16:26:07.316285 kernel: audit: type=1334 audit(1719332767.291:7): prog-id=6 op=LOAD Jun 25 16:26:07.291000 audit: BPF prog-id=6 op=LOAD Jun 25 16:26:07.319440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:26:07.320660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:26:07.326087 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:26:07.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.342078 kernel: audit: type=1130 audit(1719332767.323:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.354700 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:26:07.354792 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:26:07.362074 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:26:07.368914 dracut-cmdline[200]: dracut-dracut-053 Jun 25 16:26:07.373679 systemd-modules-load[180]: Inserted module 'dm_multipath' Jun 25 16:26:07.376129 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:26:07.387197 kernel: audit: type=1130 audit(1719332767.380:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.380204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:26:07.386112 systemd-resolved[199]: Positive Trust Anchors: Jun 25 16:26:07.386139 systemd-resolved[199]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:26:07.386205 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:26:07.390382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:26:07.402919 systemd-resolved[199]: Defaulting to hostname 'linux'. Jun 25 16:26:07.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.408086 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:26:07.416289 kernel: audit: type=1130 audit(1719332767.408:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.409113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:26:07.418579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:26:07.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.531093 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:26:07.552083 kernel: iscsi: registered transport (tcp) Jun 25 16:26:07.587201 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:26:07.587306 kernel: QLogic iSCSI HBA Driver Jun 25 16:26:07.678936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:26:07.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.689478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:26:07.786224 kernel: raid6: avx2x4 gen() 17392 MB/s Jun 25 16:26:07.804136 kernel: raid6: avx2x2 gen() 18599 MB/s Jun 25 16:26:07.822884 kernel: raid6: avx2x1 gen() 17970 MB/s Jun 25 16:26:07.823014 kernel: raid6: using algorithm avx2x2 gen() 18599 MB/s Jun 25 16:26:07.841144 kernel: raid6: .... xor() 16184 MB/s, rmw enabled Jun 25 16:26:07.841246 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:26:07.848131 kernel: xor: automatically using best checksumming function avx Jun 25 16:26:08.073098 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:26:08.094938 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:26:08.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.096000 audit: BPF prog-id=7 op=LOAD Jun 25 16:26:08.096000 audit: BPF prog-id=8 op=LOAD Jun 25 16:26:08.103562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 16:26:08.134845 systemd-udevd[381]: Using default interface naming scheme 'v252'. Jun 25 16:26:08.144472 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:26:08.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.157376 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:26:08.192089 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation Jun 25 16:26:08.262842 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:26:08.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.277508 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:26:08.366145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:26:08.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:08.474103 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 25 16:26:08.601116 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 25 16:26:08.601366 kernel: scsi host0: Virtio SCSI HBA Jun 25 16:26:08.601579 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:26:08.601615 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:26:08.601636 kernel: GPT:9289727 != 125829119 Jun 25 16:26:08.601654 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:26:08.601672 kernel: GPT:9289727 != 125829119 Jun 25 16:26:08.601689 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:26:08.601708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:08.601727 kernel: libata version 3.00 loaded. Jun 25 16:26:08.601746 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:26:08.602120 kernel: scsi host1: ata_piix Jun 25 16:26:08.602421 kernel: scsi host2: ata_piix Jun 25 16:26:08.602697 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jun 25 16:26:08.602722 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jun 25 16:26:08.602744 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:26:08.602771 kernel: AES CTR mode by8 optimization enabled Jun 25 16:26:08.602794 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 25 16:26:08.611310 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jun 25 16:26:08.625085 kernel: ACPI: bus type USB registered Jun 25 16:26:08.625182 kernel: usbcore: registered new interface driver usbfs Jun 25 16:26:08.625202 kernel: usbcore: registered new interface driver hub Jun 25 16:26:08.625219 kernel: usbcore: registered new device driver usb Jun 25 16:26:08.799080 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (424) Jun 25 16:26:08.821042 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:26:08.830914 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jun 25 16:26:08.841106 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (425) Jun 25 16:26:08.844538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:26:08.860068 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 25 16:26:08.869353 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 25 16:26:08.869555 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 25 16:26:08.869710 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 25 16:26:08.869856 kernel: hub 1-0:1.0: USB hub found Jun 25 16:26:08.870342 kernel: hub 1-0:1.0: 2 ports detected Jun 25 16:26:08.867573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 16:26:08.870478 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 16:26:08.883013 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:26:08.890994 disk-uuid[513]: Primary Header is updated. Jun 25 16:26:08.890994 disk-uuid[513]: Secondary Entries is updated. Jun 25 16:26:08.890994 disk-uuid[513]: Secondary Header is updated. Jun 25 16:26:08.895542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:09.920071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:26:09.921318 disk-uuid[514]: The operation has completed successfully. Jun 25 16:26:10.013024 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:26:10.014429 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:26:10.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.017845 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:26:10.017909 kernel: audit: type=1130 audit(1719332770.015:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.024722 kernel: audit: type=1131 audit(1719332770.015:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.030919 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:26:10.035685 sh[526]: Success Jun 25 16:26:10.061061 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:26:10.129079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:26:10.150891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:26:10.153504 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:26:10.161867 kernel: audit: type=1130 audit(1719332770.154:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:10.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.180125 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:26:10.180254 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:10.183300 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:26:10.185529 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:26:10.187446 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:26:10.212099 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:26:10.213327 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:26:10.221008 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:26:10.223884 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:26:10.248381 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:10.248488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:10.248517 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:26:10.274478 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:26:10.277302 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:10.289573 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:26:10.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.298245 kernel: audit: type=1130 audit(1719332770.291:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.298773 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:26:10.438600 ignition[625]: Ignition 2.15.0 Jun 25 16:26:10.439227 ignition[625]: Stage: fetch-offline Jun 25 16:26:10.439340 ignition[625]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:10.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.441842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:26:10.450417 kernel: audit: type=1130 audit(1719332770.442:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:10.439364 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:10.439542 ignition[625]: parsed url from cmdline: "" Jun 25 16:26:10.439549 ignition[625]: no config URL provided Jun 25 16:26:10.439559 ignition[625]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:26:10.439574 ignition[625]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:26:10.439584 ignition[625]: failed to fetch config: resource requires networking Jun 25 16:26:10.439826 ignition[625]: Ignition finished successfully Jun 25 16:26:10.507595 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:26:10.516737 kernel: audit: type=1130 audit(1719332770.508:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.516785 kernel: audit: type=1334 audit(1719332770.510:25): prog-id=9 op=LOAD Jun 25 16:26:10.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.510000 audit: BPF prog-id=9 op=LOAD Jun 25 16:26:10.519199 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:26:10.552089 systemd-networkd[711]: lo: Link UP Jun 25 16:26:10.552097 systemd-networkd[711]: lo: Gained carrier Jun 25 16:26:10.560522 kernel: audit: type=1130 audit(1719332770.554:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.552894 systemd-networkd[711]: Enumeration completed Jun 25 16:26:10.553144 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:26:10.553722 systemd-networkd[711]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:26:10.553728 systemd-networkd[711]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:26:10.554838 systemd[1]: Reached target network.target - Network. Jun 25 16:26:10.563447 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 25 16:26:10.563453 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jun 25 16:26:10.564547 systemd-networkd[711]: eth1: Link UP Jun 25 16:26:10.564554 systemd-networkd[711]: eth1: Gained carrier Jun 25 16:26:10.564566 systemd-networkd[711]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:26:10.568493 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:26:10.575503 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... 
Jun 25 16:26:10.584953 systemd-networkd[711]: eth0: Link UP Jun 25 16:26:10.584958 systemd-networkd[711]: eth0: Gained carrier Jun 25 16:26:10.584975 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 25 16:26:10.599719 ignition[713]: Ignition 2.15.0 Jun 25 16:26:10.599732 ignition[713]: Stage: fetch Jun 25 16:26:10.600866 systemd-networkd[711]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253 Jun 25 16:26:10.599922 ignition[713]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:10.599938 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:10.600104 ignition[713]: parsed url from cmdline: "" Jun 25 16:26:10.600112 ignition[713]: no config URL provided Jun 25 16:26:10.600121 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:26:10.600136 ignition[713]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:26:10.600173 ignition[713]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 25 16:26:10.600473 ignition[713]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 25 16:26:10.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.609521 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:26:10.616601 kernel: audit: type=1130 audit(1719332770.610:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.612212 systemd-networkd[711]: eth0: DHCPv4 address 161.35.235.79/20, gateway 161.35.224.1 acquired from 169.254.169.253 Jun 25 16:26:10.618469 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:26:10.624056 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:26:10.624056 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jun 25 16:26:10.624056 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:26:10.624056 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:26:10.624056 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:26:10.638941 kernel: audit: type=1130 audit(1719332770.629:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.639134 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:26:10.627871 systemd[1]: Started iscsid.service - Open-iSCSI. 
Jun 25 16:26:10.641202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:26:10.667175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:26:10.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.668462 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:26:10.670195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:26:10.672247 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:26:10.681659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:26:10.700898 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:26:10.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.800836 ignition[713]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Jun 25 16:26:10.823307 ignition[713]: GET result: OK Jun 25 16:26:10.823547 ignition[713]: parsing config with SHA512: 23c45e1a1da3b0e508344c78b9f3a66d95a11aa5801ea523c7eca2e22f5be858a1e116b459e3295b1d2ee132f870606ba97cd548c357cd602d9a70a198c2ec86 Jun 25 16:26:10.830948 unknown[713]: fetched base config from "system" Jun 25 16:26:10.830965 unknown[713]: fetched base config from "system" Jun 25 16:26:10.834757 ignition[713]: fetch: fetch complete Jun 25 16:26:10.830975 unknown[713]: fetched user config from "digitalocean" Jun 25 16:26:10.834769 ignition[713]: fetch: fetch passed Jun 25 16:26:10.837342 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:26:10.834891 ignition[713]: Ignition finished successfully Jun 25 16:26:10.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.864333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:26:10.886547 ignition[736]: Ignition 2.15.0 Jun 25 16:26:10.886565 ignition[736]: Stage: kargs Jun 25 16:26:10.886792 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:10.886812 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:10.888533 ignition[736]: kargs: kargs passed Jun 25 16:26:10.891168 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:26:10.888626 ignition[736]: Ignition finished successfully Jun 25 16:26:10.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.901057 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
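The fetch stage above fails on attempt #1 because networking is not yet up, retries once the interfaces are configured, and then logs a SHA512 digest of the retrieved config. The standard-library Python sketch below mirrors that fetch-with-retry-and-hash pattern; it is only an illustration of the behaviour visible in the log, not Ignition's actual implementation.

    # Rough sketch of the fetch pattern visible above: retry the DigitalOcean
    # user-data endpoint until the network is up, then record a SHA512 of the
    # fetched bytes. Standard library only; not Ignition's actual code.
    import hashlib
    import time
    import urllib.error
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/metadata/v1/user-data"

    def fetch_user_data(url: str = USER_DATA_URL, attempts: int = 5,
                        delay: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as exc:
                # Attempt #1 above failed the same way: network is unreachable.
                print(f"GET {url}: attempt #{attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("could not fetch user-data")

    if __name__ == "__main__":
        data = fetch_user_data()
        print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())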
Jun 25 16:26:10.923885 ignition[742]: Ignition 2.15.0 Jun 25 16:26:10.923900 ignition[742]: Stage: disks Jun 25 16:26:10.924117 ignition[742]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:10.924137 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:10.928621 ignition[742]: disks: disks passed Jun 25 16:26:10.929361 ignition[742]: Ignition finished successfully Jun 25 16:26:10.931721 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:26:10.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.932739 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:26:10.933832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:26:10.935608 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:26:10.937691 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:26:10.939323 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:26:10.952441 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:26:10.979299 systemd-fsck[750]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:26:10.985627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:26:10.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.995475 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:26:11.147065 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:26:11.148739 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:26:11.151056 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:26:11.161681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:26:11.167507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:26:11.176614 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jun 25 16:26:11.189834 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (756) Jun 25 16:26:11.189905 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:11.190153 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:11.190185 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:26:11.200734 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 16:26:11.203861 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:26:11.203924 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:26:11.212797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:26:11.214801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:26:11.232074 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
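After the disks stage, the root filesystem is checked by label and mounted at /sysroot, with the OEM (BTRFS on vda6) and /usr partitions mounted beneath it. The sketch below is a minimal illustrative equivalent of that mount-by-label step; in the boot above the real work is done by generated systemd mount units, and running this requires root.

    # Illustrative equivalent of the sysroot mount above: resolve the ROOT
    # label to its block device and mount it at /sysroot. Needs root and
    # util-linux mount(8); shown only for orientation.
    import os
    import subprocess

    def mount_by_label(label: str = "ROOT", target: str = "/sysroot") -> str:
        device = os.path.realpath(f"/dev/disk/by-label/{label}")  # e.g. /dev/vda9
        os.makedirs(target, exist_ok=True)
        subprocess.run(["mount", device, target], check=True)
        return device

    if __name__ == "__main__":
        print(mount_by_label())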
Jun 25 16:26:11.331781 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:26:11.351815 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:26:11.374500 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:26:11.383729 coreos-metadata[758]: Jun 25 16:26:11.383 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:26:11.387173 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:26:11.402239 coreos-metadata[758]: Jun 25 16:26:11.402 INFO Fetch successful Jun 25 16:26:11.414455 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jun 25 16:26:11.414630 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jun 25 16:26:11.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.421279 coreos-metadata[776]: Jun 25 16:26:11.421 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:26:11.442264 coreos-metadata[776]: Jun 25 16:26:11.442 INFO Fetch successful Jun 25 16:26:11.454854 coreos-metadata[776]: Jun 25 16:26:11.454 INFO wrote hostname ci-3815.2.4-0-d0607f9d2c to /sysroot/etc/hostname Jun 25 16:26:11.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.458537 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:26:11.595726 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:26:11.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.607393 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:26:11.611115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:26:11.630169 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:11.631617 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:26:11.658599 systemd-networkd[711]: eth0: Gained IPv6LL Jun 25 16:26:11.681711 ignition[874]: INFO : Ignition 2.15.0 Jun 25 16:26:11.683283 ignition[874]: INFO : Stage: mount Jun 25 16:26:11.684557 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:11.685709 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:11.689297 ignition[874]: INFO : mount: mount passed Jun 25 16:26:11.690565 ignition[874]: INFO : Ignition finished successfully Jun 25 16:26:11.693600 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:26:11.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:11.700842 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:26:11.709245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:26:11.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.723233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:26:11.740099 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883) Jun 25 16:26:11.745097 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:26:11.745244 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:26:11.748184 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:26:11.759236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:26:11.789705 ignition[901]: INFO : Ignition 2.15.0 Jun 25 16:26:11.789705 ignition[901]: INFO : Stage: files Jun 25 16:26:11.792129 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:11.792129 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:11.792129 ignition[901]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:26:11.796921 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:26:11.796921 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:26:11.802664 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:26:11.804295 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:26:11.805786 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:26:11.805432 unknown[901]: wrote ssh authorized keys file for user: core Jun 25 16:26:11.809664 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:26:11.811594 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:26:11.857317 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:26:11.939325 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:26:11.939325 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:26:11.943776 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 16:26:12.234815 systemd-networkd[711]: eth1: Gained IPv6LL Jun 25 16:26:12.293503 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:26:12.775729 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:26:12.775729 ignition[901]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:26:12.778836 ignition[901]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:26:12.778836 ignition[901]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:26:12.778836 ignition[901]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:26:12.778836 ignition[901]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:26:12.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.784531 ignition[901]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:26:12.784531 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:26:12.784531 ignition[901]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:26:12.784531 ignition[901]: INFO : files: files passed Jun 25 16:26:12.784531 ignition[901]: INFO : Ignition finished successfully Jun 25 16:26:12.781280 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:26:12.790150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jun 25 16:26:12.795158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:26:12.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.797541 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:26:12.797749 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:26:12.817347 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:26:12.817347 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:26:12.822923 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:26:12.826190 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:26:12.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.827450 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:26:12.839753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:26:12.882705 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:26:12.882917 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:26:12.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.885220 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:26:12.887533 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:26:12.889639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:26:12.891888 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:26:12.927656 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:26:12.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.938567 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:26:12.958380 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:26:12.960869 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:26:12.962364 systemd[1]: Stopped target timers.target - Timer Units. 
Jun 25 16:26:12.964531 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:26:12.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.964801 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:26:12.967004 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:26:12.968703 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:26:12.970579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:26:12.972279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:26:12.974166 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:26:12.976027 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:26:12.978105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:26:12.983582 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:26:12.985540 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:26:12.987658 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:26:12.989645 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:26:12.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.991233 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:26:12.991534 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:26:12.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.993329 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:26:12.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.994672 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:26:12.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.994961 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:26:13.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.996493 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:26:12.996790 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:26:12.998317 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:26:12.998572 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:26:12.999924 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 16:26:13.020206 iscsid[722]: iscsid shutting down. 
Jun 25 16:26:13.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.000248 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:26:13.006908 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:26:13.015422 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:26:13.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.021584 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:26:13.022654 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:26:13.022984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:26:13.024232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:26:13.147725 ignition[945]: INFO : Ignition 2.15.0 Jun 25 16:26:13.147725 ignition[945]: INFO : Stage: umount Jun 25 16:26:13.147725 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:26:13.147725 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:26:13.147725 ignition[945]: INFO : umount: umount passed Jun 25 16:26:13.147725 ignition[945]: INFO : Ignition finished successfully Jun 25 16:26:13.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.024445 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:26:13.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.032945 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:26:13.033200 systemd[1]: Stopped iscsid.service - Open-iSCSI. 
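The kernel audit records interleaved above carry their own timestamp and serial in the form audit(EPOCH.MILLIS:SERIAL), which is how entries that appear out of order can be correlated with the journal's wall-clock prefixes. A small parsing sketch:

    # Small sketch for correlating audit records such as
    #   audit(1719332770.629:28)
    # with the wall-clock prefixes on the surrounding journal lines.
    import re
    from datetime import datetime, timezone

    AUDIT_RE = re.compile(r"audit\((?P<epoch>\d+\.\d+):(?P<serial>\d+)\)")

    def parse_audit_stamp(line: str):
        """Return (UTC datetime, serial) for the first audit(...) stamp in a line."""
        m = AUDIT_RE.search(line)
        if m is None:
            return None
        ts = datetime.fromtimestamp(float(m.group("epoch")), tz=timezone.utc)
        return ts, int(m.group("serial"))

    if __name__ == "__main__":
        sample = 'kernel: audit: type=1130 audit(1719332770.629:28): pid=1 ...'
        # Prints serial 28 at roughly Jun 25 16:26:10.629 UTC, matching the log above.
        print(parse_audit_stamp(sample))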
Jun 25 16:26:13.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.039513 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:26:13.042744 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:26:13.054835 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:26:13.150662 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:26:13.151601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:26:13.151769 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:26:13.154285 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:26:13.154473 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:26:13.169340 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:26:13.169484 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:26:13.170631 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:26:13.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.170731 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:26:13.171765 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:26:13.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.171863 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:26:13.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.174510 systemd[1]: Stopped target network.target - Network. Jun 25 16:26:13.176209 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:26:13.176339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:26:13.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.178158 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:26:13.179413 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:26:13.181830 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:26:13.183655 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:26:13.185135 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:26:13.187192 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:26:13.187279 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:26:13.190300 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:26:13.190412 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:26:13.191883 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:26:13.192018 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jun 25 16:26:13.193496 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:26:13.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.195297 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:26:13.196598 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:26:13.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.196790 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:26:13.198106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:26:13.198252 systemd-networkd[711]: eth1: DHCPv6 lease lost Jun 25 16:26:13.198664 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:26:13.201241 systemd-networkd[711]: eth0: DHCPv6 lease lost Jun 25 16:26:13.257000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:26:13.203502 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:26:13.203693 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:26:13.205108 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:26:13.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.207780 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:26:13.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.264000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:26:13.228502 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:26:13.231558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:26:13.231713 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:26:13.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.232699 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:26:13.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.232802 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:26:13.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:26:13.233662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:26:13.233733 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:26:13.238550 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:26:13.241357 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:26:13.242419 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:26:13.242633 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:26:13.261161 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:26:13.261449 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:26:13.263472 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:26:13.263640 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:26:13.264904 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:26:13.264986 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:26:13.266460 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:26:13.266547 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:26:13.268022 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:26:13.268190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:26:13.270214 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:26:13.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.270346 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:26:13.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.272169 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:26:13.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.272265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:26:13.286392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:26:13.292534 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:26:13.292694 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:26:13.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.295097 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:26:13.295197 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:26:13.296361 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:26:13.296424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jun 25 16:26:13.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.298276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:26:13.298374 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:26:13.305696 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:26:13.305849 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:26:13.307300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:26:13.307510 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:26:13.308862 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:26:13.322418 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:26:13.332456 systemd[1]: Switching root. Jun 25 16:26:13.362795 systemd-journald[179]: Journal stopped Jun 25 16:26:15.204555 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jun 25 16:26:15.204653 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:26:15.204683 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:26:15.204702 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:26:15.204746 kernel: SELinux: policy capability open_perms=1 Jun 25 16:26:15.204765 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:26:15.204784 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:26:15.204802 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:26:15.204821 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:26:15.204850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:26:15.204869 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:26:15.204888 systemd[1]: Successfully loaded SELinux policy in 89.773ms. Jun 25 16:26:15.204917 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.210ms. Jun 25 16:26:15.204944 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:26:15.204971 systemd[1]: Detected virtualization kvm. Jun 25 16:26:15.204990 systemd[1]: Detected architecture x86-64. Jun 25 16:26:15.205014 systemd[1]: Detected first boot. Jun 25 16:26:15.205050 systemd[1]: Hostname set to <ci-3815.2.4-0-d0607f9d2c>. Jun 25 16:26:15.205075 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:26:15.205095 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:26:15.205115 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:26:15.205135 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:26:15.205154 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 25 16:26:15.205174 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:26:15.205200 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:26:15.205234 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:26:15.205253 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:26:15.205274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:26:15.205294 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:26:15.205313 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:26:15.205332 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:26:15.205352 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:26:15.205371 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:26:15.205391 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:26:15.205415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:26:15.205434 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:26:15.205453 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:26:15.205476 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:26:15.205494 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:26:15.205514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:26:15.205534 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:26:15.205558 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:26:15.205577 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:26:15.205597 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:26:15.205615 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:26:15.205636 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:26:15.205662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:26:15.205681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:26:15.205700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:26:15.205724 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:26:15.205743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:26:15.205762 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:26:15.205781 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:26:15.205799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:15.205819 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:26:15.205839 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:26:15.205873 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jun 25 16:26:15.205899 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:26:15.205920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:15.205939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:26:15.205959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:26:15.205979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:15.206005 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:26:15.206024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:15.206057 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:26:15.206077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:15.206101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:26:15.206147 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:26:15.206166 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:26:15.206186 kernel: kauditd_printk_skb: 72 callbacks suppressed Jun 25 16:26:15.206208 kernel: audit: type=1131 audit(1719332775.100:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.206229 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:26:15.206248 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:26:15.206267 kernel: fuse: init (API version 7.37) Jun 25 16:26:15.206291 kernel: audit: type=1131 audit(1719332775.118:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.206310 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:26:15.206329 kernel: audit: type=1130 audit(1719332775.128:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.206347 kernel: audit: type=1131 audit(1719332775.128:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.206365 kernel: audit: type=1334 audit(1719332775.130:105): prog-id=15 op=LOAD Jun 25 16:26:15.206383 kernel: audit: type=1334 audit(1719332775.149:106): prog-id=16 op=LOAD Jun 25 16:26:15.206402 kernel: audit: type=1334 audit(1719332775.151:107): prog-id=17 op=LOAD Jun 25 16:26:15.206424 kernel: audit: type=1334 audit(1719332775.152:108): prog-id=13 op=UNLOAD Jun 25 16:26:15.206442 kernel: audit: type=1334 audit(1719332775.152:109): prog-id=14 op=UNLOAD Jun 25 16:26:15.206463 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:26:15.206482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jun 25 16:26:15.206500 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:26:15.206520 kernel: loop: module loaded Jun 25 16:26:15.206537 kernel: audit: type=1305 audit(1719332775.192:110): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:26:15.206564 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:26:15.206591 systemd-journald[1046]: Journal started Jun 25 16:26:15.206670 systemd-journald[1046]: Runtime Journal (/run/log/journal/3dd16c363a66405dabaa47fccdda6fe5) is 4.9M, max 39.3M, 34.4M free. Jun 25 16:26:13.572000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:26:13.728000 audit: BPF prog-id=10 op=LOAD Jun 25 16:26:13.728000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:26:13.728000 audit: BPF prog-id=11 op=LOAD Jun 25 16:26:13.728000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:26:14.858000 audit: BPF prog-id=12 op=LOAD Jun 25 16:26:14.858000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:26:14.858000 audit: BPF prog-id=13 op=LOAD Jun 25 16:26:14.858000 audit: BPF prog-id=14 op=LOAD Jun 25 16:26:14.858000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:26:14.858000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:26:14.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:14.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:14.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:14.868000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:26:15.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:15.130000 audit: BPF prog-id=15 op=LOAD Jun 25 16:26:15.149000 audit: BPF prog-id=16 op=LOAD Jun 25 16:26:15.151000 audit: BPF prog-id=17 op=LOAD Jun 25 16:26:15.152000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:26:15.152000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:26:15.192000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:26:15.192000 audit[1046]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffddc109090 a2=4000 a3=7ffddc10912c items=0 ppid=1 pid=1046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:15.192000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:26:14.847761 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:26:14.847784 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:26:14.860314 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:26:15.228374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:26:15.228552 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:26:15.228588 systemd[1]: Stopped verity-setup.service. Jun 25 16:26:15.228616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:15.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.232073 kernel: ACPI: bus type drm_connector registered Jun 25 16:26:15.242166 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:26:15.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.243472 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:26:15.245540 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:26:15.246714 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:26:15.247596 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:26:15.248463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:26:15.250342 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:26:15.251565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:26:15.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.253342 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:26:15.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:15.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.253603 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:26:15.254953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:15.256251 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:15.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.257726 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:26:15.257964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:26:15.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.259280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:15.259474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:15.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.260649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:26:15.261266 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:26:15.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.262533 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:15.263254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:26:15.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:15.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.265177 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:26:15.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.267106 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:26:15.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.268494 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:26:15.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.272337 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:26:15.282379 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:26:15.287538 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:26:15.288576 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:26:15.298420 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:26:15.302596 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:26:15.303654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:15.310615 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:26:15.311890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:26:15.320636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:26:15.324093 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:26:15.329563 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:26:15.349727 systemd-journald[1046]: Time spent on flushing to /var/log/journal/3dd16c363a66405dabaa47fccdda6fe5 is 68.298ms for 1128 entries. Jun 25 16:26:15.349727 systemd-journald[1046]: System Journal (/var/log/journal/3dd16c363a66405dabaa47fccdda6fe5) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:26:15.434460 systemd-journald[1046]: Received client request to flush runtime journal. Jun 25 16:26:15.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:15.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.353698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:26:15.362398 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:26:15.363520 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:26:15.376743 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:26:15.384440 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:26:15.417554 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:26:15.424496 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:26:15.436389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:26:15.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.453796 udevadm[1078]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:26:15.466183 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:26:15.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:15.474572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:26:15.514013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:26:15.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.537715 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:26:16.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.539000 audit: BPF prog-id=18 op=LOAD Jun 25 16:26:16.539000 audit: BPF prog-id=19 op=LOAD Jun 25 16:26:16.539000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:26:16.539000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:26:16.546822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 16:26:16.588341 systemd-udevd[1082]: Using default interface naming scheme 'v252'. Jun 25 16:26:16.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.635000 audit: BPF prog-id=20 op=LOAD Jun 25 16:26:16.631006 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:26:16.642179 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:26:16.651000 audit: BPF prog-id=21 op=LOAD Jun 25 16:26:16.651000 audit: BPF prog-id=22 op=LOAD Jun 25 16:26:16.651000 audit: BPF prog-id=23 op=LOAD Jun 25 16:26:16.657399 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:26:16.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.752521 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:26:16.776080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1092) Jun 25 16:26:16.811348 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:26:16.843171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:16.843697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:16.853378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:16.856762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:16.862163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:16.863948 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:26:16.864105 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:26:16.864228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:16.865023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:16.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:26:16.867403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:16.868609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:16.868827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:16.871510 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:16.871701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:26:16.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.874524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:16.874588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:26:16.890079 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1086) Jun 25 16:26:16.939513 systemd-networkd[1091]: lo: Link UP Jun 25 16:26:16.939527 systemd-networkd[1091]: lo: Gained carrier Jun 25 16:26:16.942923 systemd-networkd[1091]: Enumeration completed Jun 25 16:26:16.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:16.943165 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:26:16.943614 systemd-networkd[1091]: eth1: Configuring with /run/systemd/network/10-d2:36:c7:d1:98:83.network. Jun 25 16:26:16.945981 systemd-networkd[1091]: eth0: Configuring with /run/systemd/network/10-96:b1:e6:b1:23:c7.network. Jun 25 16:26:16.947840 systemd-networkd[1091]: eth1: Link UP Jun 25 16:26:16.947852 systemd-networkd[1091]: eth1: Gained carrier Jun 25 16:26:16.950380 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:26:16.951539 systemd-networkd[1091]: eth0: Link UP Jun 25 16:26:16.951546 systemd-networkd[1091]: eth0: Gained carrier Jun 25 16:26:17.007020 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
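The systemd-networkd entries above show lo, eth1, and eth0 being brought up from generated /run/systemd/network/10-<mac>.network units. The sketch below is only an illustration of that per-MAC naming convention, not networkd's own matching code: it reads each interface's MAC address from sysfs and looks for a runtime .network unit whose filename embeds it (paths and behaviour on anything other than a Linux host with such generated units are assumptions).

```python
# Hedged illustration (not systemd-networkd itself): pair each network
# interface with the /run/systemd/network/10-<mac>.network unit whose
# filename embeds its MAC address, mirroring the matching seen in the log.
from pathlib import Path
from typing import Optional

SYSFS_NET = Path("/sys/class/net")
RUNTIME_NETWORK_DIR = Path("/run/systemd/network")


def mac_of(iface: Path) -> str:
    """Return the lowercase MAC address of a sysfs network interface."""
    return (iface / "address").read_text().strip().lower()


def matching_unit(mac: str) -> Optional[Path]:
    """Find a generated .network unit whose filename embeds this MAC."""
    if not RUNTIME_NETWORK_DIR.is_dir():
        return None
    for unit in sorted(RUNTIME_NETWORK_DIR.glob("*.network")):
        if mac in unit.name.lower():
            return unit
    return None


if __name__ == "__main__":
    if not SYSFS_NET.is_dir():
        raise SystemExit("no /sys/class/net here; run on a Linux host")
    for iface in sorted(SYSFS_NET.iterdir()):
        if iface.name == "lo":
            continue
        mac = mac_of(iface)
        unit = matching_unit(mac)
        print(f"{iface.name}  {mac}  ->  {unit.name if unit else 'no generated unit'}")
```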
Jun 25 16:26:17.027061 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 16:26:17.052171 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:26:17.061066 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:26:17.065100 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:26:17.110231 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:26:17.148072 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 25 16:26:17.148202 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 25 16:26:17.224403 kernel: Console: switching to colour dummy device 80x25 Jun 25 16:26:17.228404 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 25 16:26:17.228608 kernel: [drm] features: -context_init Jun 25 16:26:17.231080 kernel: [drm] number of scanouts: 1 Jun 25 16:26:17.231225 kernel: [drm] number of cap sets: 0 Jun 25 16:26:17.243074 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 25 16:26:17.249730 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 25 16:26:17.250110 kernel: virtio-pci 0000:00:02.0: [drm] drm_plane_enable_fb_damage_clips() not called Jun 25 16:26:17.250465 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:26:17.271043 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 25 16:26:17.394063 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:26:17.424695 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:26:17.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:17.434584 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:26:17.451013 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:26:17.484573 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:26:17.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:17.485100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:26:17.490722 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:26:17.498613 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:26:17.534242 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:26:17.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:17.535081 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:26:17.545428 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 25 16:26:17.546055 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jun 25 16:26:17.546169 systemd[1]: Reached target machines.target - Containers. Jun 25 16:26:17.550190 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:26:17.603592 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:26:17.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:17.613113 kernel: ISO 9660 Extensions: RRIP_1991A Jun 25 16:26:17.616347 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 25 16:26:17.616767 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:26:17.623500 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:26:17.625378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:17.626617 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:17.630557 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:26:17.641605 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:26:17.654659 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:26:17.665069 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl) Jun 25 16:26:17.753644 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:26:17.835312 kernel: loop0: detected capacity change from 0 to 139360 Jun 25 16:26:17.872717 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:26:17.874465 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:26:17.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:17.977980 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:26:18.007522 kernel: loop1: detected capacity change from 0 to 210664 Jun 25 16:26:18.087894 kernel: loop2: detected capacity change from 0 to 80584 Jun 25 16:26:18.103967 systemd-fsck[1136]: fsck.fat 4.2 (2021-01-31) Jun 25 16:26:18.103967 systemd-fsck[1136]: /dev/vda1: 808 files, 120378/258078 clusters Jun 25 16:26:18.107917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:26:18.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.120581 systemd[1]: Mounting boot.mount - Boot partition... 
Jun 25 16:26:18.199125 kernel: loop3: detected capacity change from 0 to 8 Jun 25 16:26:18.239147 kernel: loop4: detected capacity change from 0 to 139360 Jun 25 16:26:18.304187 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:26:18.351836 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:26:18.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.362554 kernel: loop5: detected capacity change from 0 to 210664 Jun 25 16:26:18.445130 kernel: loop6: detected capacity change from 0 to 80584 Jun 25 16:26:18.590180 kernel: loop7: detected capacity change from 0 to 8 Jun 25 16:26:18.594842 (sd-sysext)[1143]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jun 25 16:26:18.597133 (sd-sysext)[1143]: Merged extensions into '/usr'. Jun 25 16:26:18.601773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:26:18.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.622905 systemd[1]: Starting ensure-sysext.service... Jun 25 16:26:18.635628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:26:18.671530 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:26:18.678518 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:26:18.679702 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:26:18.682534 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:26:18.685444 systemd[1]: Reloading. Jun 25 16:26:18.763383 systemd-networkd[1091]: eth1: Gained IPv6LL Jun 25 16:26:18.955127 systemd-networkd[1091]: eth0: Gained IPv6LL Jun 25 16:26:19.100770 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:26:19.177116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
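Just above, (sd-sysext) reports merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-digitalocean' extensions into /usr; the loop0–loop7 capacity changes correspond to those extension images being set up. The snippet below is a rough sketch of how such extension images can be enumerated; the search directories are an assumption based on systemd-sysext's documented locations, not something stated in this log.

```python
# Hedged sketch (not systemd-sysext itself): list candidate system-extension
# images or directories that a sysext-style merge would consider.
# The search paths below are assumptions taken from systemd-sysext docs.
from pathlib import Path

SEARCH_DIRS = [
    Path("/etc/extensions"),
    Path("/run/extensions"),
    Path("/var/lib/extensions"),
]


def discover_extensions() -> list:
    """Return extension images (*.raw) and plain directories found in SEARCH_DIRS."""
    found = []
    for base in SEARCH_DIRS:
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(f"{entry}  (from {base})")
    return found


if __name__ == "__main__":
    extensions = discover_extensions()
    print("\n".join(extensions) if extensions else "no extension images found")
```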
Jun 25 16:26:19.320000 audit: BPF prog-id=24 op=LOAD Jun 25 16:26:19.320000 audit: BPF prog-id=25 op=LOAD Jun 25 16:26:19.320000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:26:19.320000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:26:19.326000 audit: BPF prog-id=26 op=LOAD Jun 25 16:26:19.326000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:26:19.326000 audit: BPF prog-id=27 op=LOAD Jun 25 16:26:19.326000 audit: BPF prog-id=28 op=LOAD Jun 25 16:26:19.326000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:26:19.326000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:26:19.327000 audit: BPF prog-id=29 op=LOAD Jun 25 16:26:19.327000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:26:19.328000 audit: BPF prog-id=30 op=LOAD Jun 25 16:26:19.328000 audit: BPF prog-id=31 op=LOAD Jun 25 16:26:19.328000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:26:19.328000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:26:19.331000 audit: BPF prog-id=32 op=LOAD Jun 25 16:26:19.331000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:26:19.342832 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:26:19.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.349994 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:26:19.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.358892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:26:19.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.384004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:26:19.392367 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:26:19.400008 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:26:19.407000 audit: BPF prog-id=33 op=LOAD Jun 25 16:26:19.422000 audit: BPF prog-id=34 op=LOAD Jun 25 16:26:19.417800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:26:19.430067 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:26:19.442911 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:26:19.455000 audit[1221]: SYSTEM_BOOT pid=1221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.469117 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:19.469624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:19.475886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:19.491804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 25 16:26:19.501804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:19.503050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:19.503457 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:19.505429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:19.513059 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:26:19.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.516461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:19.516851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:19.528476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:19.528897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:19.540849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:19.542201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:26:19.542570 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:19.542858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:19.546347 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:26:19.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.554742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:19.555100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 25 16:26:19.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.561308 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:19.561561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:26:19.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.571881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:19.582952 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:26:19.588132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:19.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.590130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:19.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.605825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:19.606638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:26:19.614000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:26:19.614000 audit[1232]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffffd1d9af0 a2=420 a3=0 items=0 ppid=1208 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:19.614000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:26:19.615811 augenrules[1232]: No rules Jun 25 16:26:19.616806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:26:19.622523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:26:19.636016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:26:19.647154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:26:19.648633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
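The PROCTITLE field in the audit record above is the recorded process command line, hex-encoded with NUL separators between arguments. Decoding the exact value from the log shows it is `/sbin/auditctl -R /etc/audit/audit.rules`, i.e. audit-rules.service loading the rules file that augenrules then reports as containing no rules:

```python
# Decode the audit PROCTITLE field from the log above: it is the process
# argv, hex-encoded, with NUL bytes separating the individual arguments.
PROCTITLE_HEX = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)

argv = bytes.fromhex(PROCTITLE_HEX).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# prints: /sbin/auditctl -R /etc/audit/audit.rules
```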
Jun 25 16:26:19.649103 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:19.649436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:26:19.654144 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:26:19.668007 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:26:19.681741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:26:19.682140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:26:19.690431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:26:19.697986 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:26:19.701275 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:26:19.709693 systemd[1]: Finished ensure-sysext.service. Jun 25 16:26:19.712331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:26:19.712554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:26:19.721215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:26:19.721299 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:26:19.722298 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:26:19.722598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:26:19.734302 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:26:19.753184 systemd-resolved[1212]: Positive Trust Anchors: Jun 25 16:26:19.753706 systemd-resolved[1212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:26:19.753891 systemd-resolved[1212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:26:19.762640 systemd-resolved[1212]: Using system hostname 'ci-3815.2.4-0-d0607f9d2c'. Jun 25 16:26:19.767872 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:26:19.768833 systemd[1]: Reached target network.target - Network. Jun 25 16:26:19.769580 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:26:19.770315 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:26:19.780764 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:26:19.781733 systemd[1]: Reached target sysinit.target - System Initialization. 
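The systemd-resolved "Positive Trust Anchors" entry above carries the root zone's DNSSEC DS trust anchor. As a small aid to reading it, the snippet below splits that record (copied from the log) into its DS RDATA fields; the field layout and the meaning of algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256) follow the DNSSEC RFCs rather than anything stated in this log.

```python
# Split the DS trust anchor logged by systemd-resolved into its RDATA fields
# (key tag, algorithm, digest type, digest). Record text copied from the log.
DS_RECORD = (
    ". IN DS 20326 8 2 "
    "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
)

owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = DS_RECORD.split()
print(f"owner={owner} class={rrclass} type={rrtype}")
print(f"key_tag={key_tag} algorithm={algorithm} (RSA/SHA-256) digest_type={digest_type} (SHA-256)")
print(f"digest={digest}")
```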
Jun 25 16:26:19.782767 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:26:19.783659 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:26:19.784418 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:26:19.785136 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:26:19.785187 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:26:19.785847 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:26:19.788479 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:26:19.790720 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:26:19.791477 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:26:19.793423 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:26:19.800387 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:26:19.808880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:26:19.812299 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:19.812584 systemd-timesyncd[1218]: Contacted time server 155.248.196.28:123 (0.flatcar.pool.ntp.org). Jun 25 16:26:19.813494 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:26:19.814704 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:26:19.815444 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:26:19.818626 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:26:19.818676 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:26:19.819769 systemd-timesyncd[1218]: Initial clock synchronization to Tue 2024-06-25 16:26:20.130652 UTC. Jun 25 16:26:19.826501 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:26:19.849619 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:26:19.855434 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:26:19.869374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:26:19.878800 jq[1250]: false Jun 25 16:26:19.883667 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:26:19.887857 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:26:19.898256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:19.913449 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:26:19.918220 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:26:19.930242 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:26:19.943464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jun 25 16:26:19.950685 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:26:19.968982 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:26:19.971971 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:26:19.973368 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:26:19.974765 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:26:19.983822 dbus-daemon[1247]: [system] SELinux support is enabled Jun 25 16:26:19.985521 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:26:19.989901 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:26:19.996831 jq[1268]: true Jun 25 16:26:20.000089 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:26:20.019149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:26:20.019619 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:26:20.025311 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:26:20.025622 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:26:20.048384 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:26:20.048874 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:26:20.053571 extend-filesystems[1251]: Found loop4 Jun 25 16:26:20.056548 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:26:20.056665 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:26:20.064124 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:26:20.064537 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 25 16:26:20.064603 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jun 25 16:26:20.082952 extend-filesystems[1251]: Found loop5 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found loop6 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found loop7 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda1 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda2 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda3 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found usr Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda4 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda6 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda7 Jun 25 16:26:20.082952 extend-filesystems[1251]: Found vda9 Jun 25 16:26:20.082952 extend-filesystems[1251]: Checking size of /dev/vda9 Jun 25 16:26:20.191692 tar[1273]: linux-amd64/helm Jun 25 16:26:20.192153 update_engine[1267]: I0625 16:26:20.181473 1267 main.cc:92] Flatcar Update Engine starting Jun 25 16:26:20.192459 jq[1275]: true Jun 25 16:26:20.194782 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:26:20.200245 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:26:20.207515 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:26:20.214840 update_engine[1267]: I0625 16:26:20.214756 1267 update_check_scheduler.cc:74] Next update check in 3m33s Jun 25 16:26:20.274548 bash[1301]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:26:20.278498 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:26:20.285446 extend-filesystems[1251]: Resized partition /dev/vda9 Jun 25 16:26:20.290446 systemd[1]: Starting sshkeys.service... Jun 25 16:26:20.318875 coreos-metadata[1246]: Jun 25 16:26:20.255 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:26:20.319747 extend-filesystems[1305]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:26:20.369800 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 25 16:26:20.345982 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:26:20.370762 coreos-metadata[1246]: Jun 25 16:26:20.370 INFO Fetch successful Jun 25 16:26:20.373864 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 16:26:20.501925 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:26:20.504334 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:26:20.583603 systemd-logind[1266]: New seat seat0. Jun 25 16:26:20.588900 systemd-logind[1266]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:26:20.589880 systemd-logind[1266]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:26:20.593313 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:26:20.623269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1312) Jun 25 16:26:20.673244 locksmithd[1294]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:26:20.686096 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 25 16:26:20.724753 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
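coreos-metadata above reports fetching http://169.254.169.254/metadata/v1.json and succeeding on the first attempt. The sketch below is not the Flatcar metadata agent; it is a minimal Python example that queries the same link-local endpoint from inside the droplet and prints the top-level keys of whatever JSON it returns (the document's contents are not assumed here).

```python
# Hedged sketch (not the coreos-metadata agent): query the same link-local
# metadata endpoint that the agent logs above and print the top-level keys
# of the JSON document it returns. Only reachable from inside the droplet.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # taken from the log


def fetch_metadata(url: str = METADATA_URL, timeout: float = 5.0) -> dict:
    """GET the metadata document and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    try:
        doc = fetch_metadata()
    except OSError as exc:
        print(f"metadata endpoint unreachable (expected outside the droplet): {exc}")
    else:
        print("top-level keys:", ", ".join(sorted(doc)))
```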
Jun 25 16:26:20.745923 extend-filesystems[1305]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 16:26:20.745923 extend-filesystems[1305]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 25 16:26:20.745923 extend-filesystems[1305]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 25 16:26:20.800550 extend-filesystems[1251]: Resized filesystem in /dev/vda9 Jun 25 16:26:20.800550 extend-filesystems[1251]: Found vdb Jun 25 16:26:20.749598 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:26:20.749914 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:26:20.834235 coreos-metadata[1306]: Jun 25 16:26:20.834 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:26:20.854281 coreos-metadata[1306]: Jun 25 16:26:20.854 INFO Fetch successful Jun 25 16:26:20.876400 unknown[1306]: wrote ssh authorized keys file for user: core Jun 25 16:26:20.903737 update-ssh-keys[1336]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:26:20.904771 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:26:20.909599 systemd[1]: Finished sshkeys.service. Jun 25 16:26:21.320203 containerd[1277]: time="2024-06-25T16:26:21.320037974Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:26:21.427951 containerd[1277]: time="2024-06-25T16:26:21.427820054Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:26:21.429174 containerd[1277]: time="2024-06-25T16:26:21.429133038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:21.431904 containerd[1277]: time="2024-06-25T16:26:21.431836951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:21.432135 containerd[1277]: time="2024-06-25T16:26:21.432106656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:21.432657 containerd[1277]: time="2024-06-25T16:26:21.432623989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:21.432803 containerd[1277]: time="2024-06-25T16:26:21.432781491Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:26:21.433025 containerd[1277]: time="2024-06-25T16:26:21.433004256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:21.433233 containerd[1277]: time="2024-06-25T16:26:21.433207733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:21.433330 containerd[1277]: time="2024-06-25T16:26:21.433312747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:26:21.433516 containerd[1277]: time="2024-06-25T16:26:21.433491266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:21.433973 containerd[1277]: time="2024-06-25T16:26:21.433942807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:21.434143 containerd[1277]: time="2024-06-25T16:26:21.434120342Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:26:21.434227 containerd[1277]: time="2024-06-25T16:26:21.434209880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:26:21.434582 containerd[1277]: time="2024-06-25T16:26:21.434551694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:26:21.434793 containerd[1277]: time="2024-06-25T16:26:21.434768577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:26:21.434988 containerd[1277]: time="2024-06-25T16:26:21.434965720Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:26:21.435105 containerd[1277]: time="2024-06-25T16:26:21.435061425Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:26:21.442039 containerd[1277]: time="2024-06-25T16:26:21.441976431Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:26:21.442351 containerd[1277]: time="2024-06-25T16:26:21.442316686Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:26:21.442458 containerd[1277]: time="2024-06-25T16:26:21.442440622Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:26:21.442634 containerd[1277]: time="2024-06-25T16:26:21.442608521Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:26:21.442758 containerd[1277]: time="2024-06-25T16:26:21.442741927Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:26:21.442851 containerd[1277]: time="2024-06-25T16:26:21.442836832Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:26:21.442953 containerd[1277]: time="2024-06-25T16:26:21.442937317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:26:21.443380 containerd[1277]: time="2024-06-25T16:26:21.443357046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:26:21.443488 containerd[1277]: time="2024-06-25T16:26:21.443471432Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:26:21.443587 containerd[1277]: time="2024-06-25T16:26:21.443570746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:26:21.443700 containerd[1277]: time="2024-06-25T16:26:21.443684357Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jun 25 16:26:21.443803 containerd[1277]: time="2024-06-25T16:26:21.443786665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.443928 containerd[1277]: time="2024-06-25T16:26:21.443911744Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.444024 containerd[1277]: time="2024-06-25T16:26:21.444007925Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.444137 containerd[1277]: time="2024-06-25T16:26:21.444120672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.444235 containerd[1277]: time="2024-06-25T16:26:21.444219995Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.444341 containerd[1277]: time="2024-06-25T16:26:21.444325605Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.444440 containerd[1277]: time="2024-06-25T16:26:21.444424726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.444533 containerd[1277]: time="2024-06-25T16:26:21.444517972Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:26:21.444806 containerd[1277]: time="2024-06-25T16:26:21.444784807Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:26:21.445319 containerd[1277]: time="2024-06-25T16:26:21.445295700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:26:21.445504 containerd[1277]: time="2024-06-25T16:26:21.445485201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.445626 containerd[1277]: time="2024-06-25T16:26:21.445609923Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:26:21.445742 containerd[1277]: time="2024-06-25T16:26:21.445726890Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:26:21.446000 containerd[1277]: time="2024-06-25T16:26:21.445977770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446115 containerd[1277]: time="2024-06-25T16:26:21.446097934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446212 containerd[1277]: time="2024-06-25T16:26:21.446196669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446289 containerd[1277]: time="2024-06-25T16:26:21.446273930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446366 containerd[1277]: time="2024-06-25T16:26:21.446351212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446454 containerd[1277]: time="2024-06-25T16:26:21.446439643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jun 25 16:26:21.446549 containerd[1277]: time="2024-06-25T16:26:21.446534466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446645 containerd[1277]: time="2024-06-25T16:26:21.446629375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.446746 containerd[1277]: time="2024-06-25T16:26:21.446730567Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:26:21.447082 containerd[1277]: time="2024-06-25T16:26:21.447029573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447220 containerd[1277]: time="2024-06-25T16:26:21.447201428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447327 containerd[1277]: time="2024-06-25T16:26:21.447310923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447427 containerd[1277]: time="2024-06-25T16:26:21.447409864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447554 containerd[1277]: time="2024-06-25T16:26:21.447533277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447672 containerd[1277]: time="2024-06-25T16:26:21.447654116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447778 containerd[1277]: time="2024-06-25T16:26:21.447759117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:26:21.447887 containerd[1277]: time="2024-06-25T16:26:21.447868470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:26:21.449044 containerd[1277]: time="2024-06-25T16:26:21.448928309Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:26:21.449683 containerd[1277]: time="2024-06-25T16:26:21.449656458Z" level=info msg="Connect containerd service" Jun 25 16:26:21.449873 containerd[1277]: time="2024-06-25T16:26:21.449853838Z" level=info msg="using legacy CRI server" Jun 25 16:26:21.449991 containerd[1277]: time="2024-06-25T16:26:21.449958977Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:26:21.450612 containerd[1277]: time="2024-06-25T16:26:21.450572287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:26:21.454666 containerd[1277]: time="2024-06-25T16:26:21.454607514Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:26:21.454951 containerd[1277]: time="2024-06-25T16:26:21.454924801Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:26:21.455116 containerd[1277]: time="2024-06-25T16:26:21.455077144Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:26:21.455216 containerd[1277]: time="2024-06-25T16:26:21.455196501Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:26:21.455332 containerd[1277]: time="2024-06-25T16:26:21.455310215Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:26:21.455991 containerd[1277]: time="2024-06-25T16:26:21.455868528Z" level=info msg="Start subscribing containerd event" Jun 25 16:26:21.456379 containerd[1277]: time="2024-06-25T16:26:21.456340813Z" level=info msg="Start recovering state" Jun 25 16:26:21.457755 containerd[1277]: time="2024-06-25T16:26:21.457520275Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:26:21.457983 containerd[1277]: time="2024-06-25T16:26:21.457965068Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:26:21.459470 containerd[1277]: time="2024-06-25T16:26:21.459443658Z" level=info msg="Start event monitor" Jun 25 16:26:21.459639 containerd[1277]: time="2024-06-25T16:26:21.459621335Z" level=info msg="Start snapshots syncer" Jun 25 16:26:21.465776 containerd[1277]: time="2024-06-25T16:26:21.465715683Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:26:21.466095 containerd[1277]: time="2024-06-25T16:26:21.466070015Z" level=info msg="Start streaming server" Jun 25 16:26:21.467291 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:26:21.470165 containerd[1277]: time="2024-06-25T16:26:21.470119582Z" level=info msg="containerd successfully booted in 0.153088s" Jun 25 16:26:21.965008 tar[1273]: linux-amd64/LICENSE Jun 25 16:26:21.965634 tar[1273]: linux-amd64/README.md Jun 25 16:26:21.979871 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:26:22.272772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:22.912440 sshd_keygen[1286]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:26:22.964828 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:26:22.975165 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:26:22.989851 systemd[1]: Started sshd@0-161.35.235.79:22-139.178.89.65:41728.service - OpenSSH per-connection server daemon (139.178.89.65:41728). Jun 25 16:26:22.997158 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:26:22.997495 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:26:23.009998 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:26:23.058466 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:26:23.070031 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:26:23.080396 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:26:23.088114 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:26:23.090759 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jun 25 16:26:23.103310 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:26:23.118328 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:26:23.118702 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:26:23.124657 systemd[1]: Startup finished in 1.939s (kernel) + 6.710s (initrd) + 9.640s (userspace) = 18.290s. Jun 25 16:26:23.168563 sshd[1359]: Accepted publickey for core from 139.178.89.65 port 41728 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:23.173370 sshd[1359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:23.193184 systemd-logind[1266]: New session 1 of user core. Jun 25 16:26:23.197384 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:26:23.208411 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:26:23.243330 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:26:23.254020 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:26:23.260566 (systemd)[1369]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:23.498378 systemd[1369]: Queued start job for default target default.target. Jun 25 16:26:23.505176 systemd[1369]: Reached target paths.target - Paths. Jun 25 16:26:23.505229 systemd[1369]: Reached target sockets.target - Sockets. Jun 25 16:26:23.505258 systemd[1369]: Reached target timers.target - Timers. Jun 25 16:26:23.505284 systemd[1369]: Reached target basic.target - Basic System. Jun 25 16:26:23.505372 systemd[1369]: Reached target default.target - Main User Target. Jun 25 16:26:23.505418 systemd[1369]: Startup finished in 230ms. Jun 25 16:26:23.505514 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:26:23.508940 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:26:23.547021 kubelet[1345]: E0625 16:26:23.546807 1345 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:23.552499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:23.552744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:23.553253 systemd[1]: kubelet.service: Consumed 1.396s CPU time. Jun 25 16:26:23.598322 systemd[1]: Started sshd@1-161.35.235.79:22-139.178.89.65:41732.service - OpenSSH per-connection server daemon (139.178.89.65:41732). Jun 25 16:26:23.661568 sshd[1379]: Accepted publickey for core from 139.178.89.65 port 41732 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:23.663694 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:23.672548 systemd-logind[1266]: New session 2 of user core. Jun 25 16:26:23.683489 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:26:23.762540 sshd[1379]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:23.776305 systemd[1]: sshd@1-161.35.235.79:22-139.178.89.65:41732.service: Deactivated successfully. Jun 25 16:26:23.777867 systemd[1]: session-2.scope: Deactivated successfully. 
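[Editor's note] systemd's "Startup finished" line above reports the kernel, initrd, and userspace phases separately; the three figures appear to be rounded to the millisecond independently, which is why they sum to 18.289s while the printed total is 18.290s. A quick check of that arithmetic:

    import re

    line = ("Startup finished in 1.939s (kernel) + 6.710s (initrd) "
            "+ 9.640s (userspace) = 18.290s.")

    # The last duration on the line is systemd's own total; the rest are the phases.
    *parts, total = (float(x) for x in re.findall(r"([\d.]+)s", line))

    print(sum(parts), total)                 # 18.289 vs. 18.29
    assert abs(sum(parts) - total) < 0.005   # each phase is rounded on its own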
Jun 25 16:26:23.779034 systemd-logind[1266]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:26:23.785464 systemd[1]: Started sshd@2-161.35.235.79:22-139.178.89.65:41736.service - OpenSSH per-connection server daemon (139.178.89.65:41736). Jun 25 16:26:23.788568 systemd-logind[1266]: Removed session 2. Jun 25 16:26:23.845049 sshd[1385]: Accepted publickey for core from 139.178.89.65 port 41736 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:23.848598 sshd[1385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:23.857980 systemd-logind[1266]: New session 3 of user core. Jun 25 16:26:23.865478 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:26:23.933400 sshd[1385]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:23.948880 systemd[1]: sshd@2-161.35.235.79:22-139.178.89.65:41736.service: Deactivated successfully. Jun 25 16:26:23.950234 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:26:23.952253 systemd-logind[1266]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:26:23.962020 systemd[1]: Started sshd@3-161.35.235.79:22-139.178.89.65:41742.service - OpenSSH per-connection server daemon (139.178.89.65:41742). Jun 25 16:26:23.963679 systemd-logind[1266]: Removed session 3. Jun 25 16:26:24.016129 sshd[1391]: Accepted publickey for core from 139.178.89.65 port 41742 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:24.020085 sshd[1391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:24.029005 systemd-logind[1266]: New session 4 of user core. Jun 25 16:26:24.042779 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:26:24.119491 sshd[1391]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:24.135679 systemd[1]: sshd@3-161.35.235.79:22-139.178.89.65:41742.service: Deactivated successfully. Jun 25 16:26:24.138078 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:26:24.139338 systemd-logind[1266]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:26:24.149838 systemd[1]: Started sshd@4-161.35.235.79:22-139.178.89.65:41754.service - OpenSSH per-connection server daemon (139.178.89.65:41754). Jun 25 16:26:24.153316 systemd-logind[1266]: Removed session 4. Jun 25 16:26:24.203418 sshd[1398]: Accepted publickey for core from 139.178.89.65 port 41754 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:24.205883 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:24.215154 systemd-logind[1266]: New session 5 of user core. Jun 25 16:26:24.222686 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:26:24.445185 sudo[1401]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:26:24.446675 sudo[1401]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:24.468416 sudo[1401]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:24.473977 sshd[1398]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:24.485821 systemd[1]: sshd@4-161.35.235.79:22-139.178.89.65:41754.service: Deactivated successfully. Jun 25 16:26:24.488580 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:26:24.490377 systemd-logind[1266]: Session 5 logged out. Waiting for processes to exit. 
Jun 25 16:26:24.503021 systemd[1]: Started sshd@5-161.35.235.79:22-139.178.89.65:41766.service - OpenSSH per-connection server daemon (139.178.89.65:41766). Jun 25 16:26:24.506409 systemd-logind[1266]: Removed session 5. Jun 25 16:26:24.557199 sshd[1405]: Accepted publickey for core from 139.178.89.65 port 41766 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:24.560390 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:24.571607 systemd-logind[1266]: New session 6 of user core. Jun 25 16:26:24.577494 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:26:24.654227 sudo[1409]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:26:24.655453 sudo[1409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:24.662959 sudo[1409]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:24.672888 sudo[1408]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:26:24.673489 sudo[1408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:24.703741 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:26:24.705000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:26:24.707416 kernel: kauditd_printk_skb: 90 callbacks suppressed Jun 25 16:26:24.707537 kernel: audit: type=1305 audit(1719332784.705:197): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:26:24.705000 audit[1412]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff21b18e40 a2=420 a3=0 items=0 ppid=1 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.714670 kernel: audit: type=1300 audit(1719332784.705:197): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff21b18e40 a2=420 a3=0 items=0 ppid=1 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.714884 kernel: audit: type=1327 audit(1719332784.705:197): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:26:24.705000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:26:24.717355 auditctl[1412]: No rules Jun 25 16:26:24.719038 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:26:24.719501 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:26:24.723290 kernel: audit: type=1131 audit(1719332784.718:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.736782 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
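[Editor's note] The kernel audit records above carry the audited command line in the PROCTITLE field as hex-encoded, NUL-separated argv. A small decoder (a sketch, not tied to any particular tooling) recovers the original command: the value logged for the audit-rules flush above decodes to /sbin/auditctl -D, and the same trick works on the NETFILTER_CFG records further down where dockerd programs its iptables chains (the first of those decodes to /usr/sbin/iptables --wait -t nat -N DOCKER).

    def decode_proctitle(hexstr: str) -> str:
        """Turn an audit PROCTITLE hex payload back into the argv string."""
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode().strip()

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # -> /sbin/auditctl -D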
Jun 25 16:26:24.791799 augenrules[1429]: No rules Jun 25 16:26:24.793733 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:26:24.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.795566 sudo[1408]: pam_unix(sudo:session): session closed for user root Jun 25 16:26:24.803434 kernel: audit: type=1130 audit(1719332784.793:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.803824 kernel: audit: type=1106 audit(1719332784.794:200): pid=1408 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.806999 kernel: audit: type=1104 audit(1719332784.794:201): pid=1408 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.794000 audit[1408]: USER_END pid=1408 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.794000 audit[1408]: CRED_DISP pid=1408 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.804475 sshd[1405]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:24.810000 audit[1405]: USER_END pid=1405 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.813588 systemd[1]: sshd@5-161.35.235.79:22-139.178.89.65:41766.service: Deactivated successfully. Jun 25 16:26:24.814932 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:26:24.810000 audit[1405]: CRED_DISP pid=1405 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.823728 kernel: audit: type=1106 audit(1719332784.810:202): pid=1405 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.823868 kernel: audit: type=1104 audit(1719332784.810:203): pid=1405 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.823705 systemd-logind[1266]: Session 6 logged out. Waiting for processes to exit. 
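[Editor's note] Where the kernel echoes auditd records it prints only numeric types (type=1305, type=1327, ...), while the userspace lines use names (CONFIG_CHANGE, PROCTITLE, ...); the pairings can be read off this log itself, e.g. type=1305 directly follows the CONFIG_CHANGE record and type=1327 the PROCTITLE record. A tiny lookup table for the record types seen in this log (codes as defined by the Linux audit framework) makes the kernel lines easier to scan:

    # Numeric audit record types appearing in this log and their names.
    AUDIT_TYPES = {
        1101: "USER_ACCT", 1103: "CRED_ACQ", 1104: "CRED_DISP",
        1105: "USER_START", 1106: "USER_END", 1110: "CRED_REFR",
        1130: "SERVICE_START", 1131: "SERVICE_STOP",
        1300: "SYSCALL", 1305: "CONFIG_CHANGE", 1327: "PROCTITLE",
    }

    print(AUDIT_TYPES[1305], AUDIT_TYPES[1327])   # CONFIG_CHANGE PROCTITLE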
Jun 25 16:26:24.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-161.35.235.79:22-139.178.89.65:41766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.829146 kernel: audit: type=1131 audit(1719332784.812:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-161.35.235.79:22-139.178.89.65:41766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.833462 systemd[1]: Started sshd@6-161.35.235.79:22-139.178.89.65:41770.service - OpenSSH per-connection server daemon (139.178.89.65:41770). Jun 25 16:26:24.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-161.35.235.79:22-139.178.89.65:41770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.836692 systemd-logind[1266]: Removed session 6. Jun 25 16:26:24.889000 audit[1435]: USER_ACCT pid=1435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.890984 sshd[1435]: Accepted publickey for core from 139.178.89.65 port 41770 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:26:24.891000 audit[1435]: CRED_ACQ pid=1435 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.892000 audit[1435]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea7ec7fe0 a2=3 a3=7faffc15f480 items=0 ppid=1 pid=1435 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.892000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:24.894284 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:24.909264 systemd-logind[1266]: New session 7 of user core. Jun 25 16:26:24.911407 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:26:24.920000 audit[1435]: USER_START pid=1435 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.923000 audit[1437]: CRED_ACQ pid=1437 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.981000 audit[1438]: USER_ACCT pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:24.984532 sudo[1438]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:26:24.983000 audit[1438]: CRED_REFR pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.985287 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:26:24.988000 audit[1438]: USER_START pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:26:25.190940 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:26:25.812273 dockerd[1447]: time="2024-06-25T16:26:25.812191290Z" level=info msg="Starting up" Jun 25 16:26:25.943785 systemd[1]: var-lib-docker-metacopy\x2dcheck2953324748-merged.mount: Deactivated successfully. Jun 25 16:26:25.963680 dockerd[1447]: time="2024-06-25T16:26:25.963589438Z" level=info msg="Loading containers: start." Jun 25 16:26:26.062000 audit[1480]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.062000 audit[1480]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff5ed24680 a2=0 a3=7fc9a3958e90 items=0 ppid=1447 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.062000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:26:26.067000 audit[1482]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.067000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb010da60 a2=0 a3=7f9e2c1ece90 items=0 ppid=1447 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.067000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:26:26.070000 audit[1484]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.070000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc2544fd30 a2=0 a3=7f32256f6e90 items=0 ppid=1447 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.070000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:26:26.073000 audit[1486]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.073000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffebb8e38b0 a2=0 a3=7f5fb0c5de90 items=0 ppid=1447 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.073000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:26:26.079000 audit[1488]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.079000 audit[1488]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffbf2306f0 a2=0 a3=7f188936de90 items=0 ppid=1447 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.079000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:26:26.083000 audit[1490]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.083000 audit[1490]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6812d510 a2=0 a3=7f86e94d5e90 items=0 ppid=1447 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.083000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:26:26.109000 audit[1492]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.109000 audit[1492]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda7113680 a2=0 a3=7feccc02ae90 items=0 ppid=1447 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.109000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:26:26.113000 audit[1494]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.113000 audit[1494]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe40691930 a2=0 a3=7fde1e35ae90 items=0 ppid=1447 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.113000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:26:26.117000 audit[1496]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.117000 audit[1496]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffff6539c30 a2=0 a3=7f1fec9a8e90 items=0 ppid=1447 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.117000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:26.138000 audit[1500]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.138000 audit[1500]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcf668ce60 a2=0 a3=7efe1dbaae90 items=0 ppid=1447 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.138000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:26.140000 audit[1501]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.140000 audit[1501]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdb39928b0 a2=0 a3=7fe00fbb3e90 items=0 ppid=1447 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.140000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:26.155082 kernel: Initializing XFRM netlink socket Jun 25 16:26:26.242000 audit[1509]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.242000 audit[1509]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffdfe2a34c0 a2=0 a3=7f13d5b12e90 items=0 ppid=1447 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.242000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:26:26.257000 audit[1512]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.257000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdd21f8660 a2=0 a3=7fadb5513e90 items=0 ppid=1447 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.257000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:26:26.266000 audit[1516]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.266000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd8a193210 a2=0 a3=7f577d6fbe90 items=0 ppid=1447 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.266000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:26:26.270000 audit[1518]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.270000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc68a8ec80 a2=0 a3=7f9fdf557e90 items=0 ppid=1447 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.270000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:26:26.274000 audit[1520]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.274000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe70b77970 a2=0 a3=7fd8c02b9e90 items=0 ppid=1447 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:26:26.280000 audit[1522]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.280000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcfea49410 a2=0 a3=7f46b0a11e90 items=0 ppid=1447 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.280000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:26:26.284000 audit[1524]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.284000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcb1284c40 a2=0 a3=7fe44e0dae90 items=0 ppid=1447 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.284000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:26:26.301000 audit[1527]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.301000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd7234f900 a2=0 a3=7fad6c84ee90 items=0 ppid=1447 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.301000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:26:26.307000 audit[1529]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.307000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd082ccc70 a2=0 a3=7fd303053e90 items=0 ppid=1447 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.307000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:26:26.311000 audit[1531]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.311000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcde22cd10 a2=0 a3=7fb1a79a8e90 items=0 ppid=1447 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.311000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:26:26.315000 audit[1533]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.315000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6b07db50 a2=0 a3=7f46239b9e90 items=0 ppid=1447 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.315000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:26:26.321214 systemd-networkd[1091]: docker0: Link UP Jun 25 16:26:26.436000 audit[1537]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.436000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe1c518030 a2=0 a3=7fe074526e90 items=0 ppid=1447 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.436000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:26.437000 audit[1538]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:26.437000 audit[1538]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff2d869f50 a2=0 a3=7ffb2c3c9e90 items=0 ppid=1447 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:26.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:26:26.438576 dockerd[1447]: time="2024-06-25T16:26:26.438487560Z" level=info msg="Loading containers: done." Jun 25 16:26:26.539871 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2952791792-merged.mount: Deactivated successfully. Jun 25 16:26:26.553517 dockerd[1447]: time="2024-06-25T16:26:26.553440726Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:26:26.554055 dockerd[1447]: time="2024-06-25T16:26:26.554017281Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:26:26.554255 dockerd[1447]: time="2024-06-25T16:26:26.554225991Z" level=info msg="Daemon has completed initialization" Jun 25 16:26:26.614777 dockerd[1447]: time="2024-06-25T16:26:26.614533978Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:26:26.616426 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:26:26.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:27.947244 containerd[1277]: time="2024-06-25T16:26:27.947167049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 16:26:29.138734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055402201.mount: Deactivated successfully. 
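[Editor's note] dockerd's overlay2 warning above ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") refers to a kernel build option. Where the running kernel exposes its build config at /proc/config.gz (that requires CONFIG_IKCONFIG_PROC and is an assumption here, not something this log shows), the option can be checked with a sketch like this:

    import gzip

    # Many kernels expose their build configuration at /proc/config.gz.
    with gzip.open("/proc/config.gz", "rt") as f:
        opts = dict(line.strip().split("=", 1) for line in f
                    if line.startswith("CONFIG_") and "=" in line)

    print(opts.get("CONFIG_OVERLAY_FS_REDIRECT_DIR", "not set"))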
Jun 25 16:26:31.295188 containerd[1277]: time="2024-06-25T16:26:31.293089988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:31.297496 containerd[1277]: time="2024-06-25T16:26:31.297401368Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jun 25 16:26:31.299733 containerd[1277]: time="2024-06-25T16:26:31.299646403Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:31.305548 containerd[1277]: time="2024-06-25T16:26:31.305443930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:31.313207 containerd[1277]: time="2024-06-25T16:26:31.313129914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:31.316981 containerd[1277]: time="2024-06-25T16:26:31.316900380Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.369647424s" Jun 25 16:26:31.317312 containerd[1277]: time="2024-06-25T16:26:31.317271426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jun 25 16:26:31.371649 containerd[1277]: time="2024-06-25T16:26:31.371585560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 16:26:33.731607 containerd[1277]: time="2024-06-25T16:26:33.731536306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:33.733888 containerd[1277]: time="2024-06-25T16:26:33.733789308Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jun 25 16:26:33.736588 containerd[1277]: time="2024-06-25T16:26:33.736532301Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:33.741225 containerd[1277]: time="2024-06-25T16:26:33.741157534Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:33.747274 containerd[1277]: time="2024-06-25T16:26:33.747212297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:33.748691 containerd[1277]: time="2024-06-25T16:26:33.748605768Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.376941978s" Jun 25 16:26:33.748691 containerd[1277]: time="2024-06-25T16:26:33.748687778Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 16:26:33.790939 containerd[1277]: time="2024-06-25T16:26:33.790863370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 16:26:33.804649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:26:33.809637 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 16:26:33.809776 kernel: audit: type=1130 audit(1719332793.804:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:33.809843 kernel: audit: type=1131 audit(1719332793.804:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:33.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:33.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:33.805141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:33.805247 systemd[1]: kubelet.service: Consumed 1.396s CPU time. Jun 25 16:26:33.816784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:34.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.039169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:34.045108 kernel: audit: type=1130 audit(1719332794.038:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.169634 kubelet[1658]: E0625 16:26:34.169551 1658 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:34.178022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:34.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:34.178790 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 16:26:34.194242 kernel: audit: type=1131 audit(1719332794.178:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:35.710950 containerd[1277]: time="2024-06-25T16:26:35.710876998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.714276 containerd[1277]: time="2024-06-25T16:26:35.714190894Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jun 25 16:26:35.717351 containerd[1277]: time="2024-06-25T16:26:35.717281185Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.725808 containerd[1277]: time="2024-06-25T16:26:35.725699961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.734128 containerd[1277]: time="2024-06-25T16:26:35.734021446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.736530 containerd[1277]: time="2024-06-25T16:26:35.736443906Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.94491929s" Jun 25 16:26:35.736530 containerd[1277]: time="2024-06-25T16:26:35.736523047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 16:26:35.783734 containerd[1277]: time="2024-06-25T16:26:35.783662741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 16:26:37.221183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087727948.mount: Deactivated successfully. 
Jun 25 16:26:37.919027 containerd[1277]: time="2024-06-25T16:26:37.918933315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:37.921854 containerd[1277]: time="2024-06-25T16:26:37.921526900Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jun 25 16:26:37.923204 containerd[1277]: time="2024-06-25T16:26:37.923103728Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:37.926185 containerd[1277]: time="2024-06-25T16:26:37.926134903Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:37.929782 containerd[1277]: time="2024-06-25T16:26:37.929725384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:37.932503 containerd[1277]: time="2024-06-25T16:26:37.932432889Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.14831363s" Jun 25 16:26:37.933235 containerd[1277]: time="2024-06-25T16:26:37.932506761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 16:26:37.987356 containerd[1277]: time="2024-06-25T16:26:37.987269218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 16:26:38.591859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745543919.mount: Deactivated successfully. 
Jun 25 16:26:39.754170 containerd[1277]: time="2024-06-25T16:26:39.754083826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.756588 containerd[1277]: time="2024-06-25T16:26:39.756487099Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 16:26:39.757839 containerd[1277]: time="2024-06-25T16:26:39.757788563Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.764279 containerd[1277]: time="2024-06-25T16:26:39.764219047Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.768557 containerd[1277]: time="2024-06-25T16:26:39.768480807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:39.771816 containerd[1277]: time="2024-06-25T16:26:39.771754194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.784408716s" Jun 25 16:26:39.772074 containerd[1277]: time="2024-06-25T16:26:39.772009448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 16:26:39.819521 containerd[1277]: time="2024-06-25T16:26:39.819463270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:26:40.476996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232715266.mount: Deactivated successfully. 
Jun 25 16:26:40.488370 containerd[1277]: time="2024-06-25T16:26:40.488296731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:40.490061 containerd[1277]: time="2024-06-25T16:26:40.489983986Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:26:40.491366 containerd[1277]: time="2024-06-25T16:26:40.491317169Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:40.494823 containerd[1277]: time="2024-06-25T16:26:40.494779196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:40.497958 containerd[1277]: time="2024-06-25T16:26:40.497899599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:40.499592 containerd[1277]: time="2024-06-25T16:26:40.499536842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 679.793548ms" Jun 25 16:26:40.499867 containerd[1277]: time="2024-06-25T16:26:40.499590860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:26:40.538872 containerd[1277]: time="2024-06-25T16:26:40.538774683Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 16:26:41.430741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3603433929.mount: Deactivated successfully. 
Jun 25 16:26:44.143444 containerd[1277]: time="2024-06-25T16:26:44.143346598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.146859 containerd[1277]: time="2024-06-25T16:26:44.146771782Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jun 25 16:26:44.147342 containerd[1277]: time="2024-06-25T16:26:44.147305324Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.152152 containerd[1277]: time="2024-06-25T16:26:44.152096429Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.163375 containerd[1277]: time="2024-06-25T16:26:44.163303247Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.624449819s" Jun 25 16:26:44.163632 containerd[1277]: time="2024-06-25T16:26:44.163600518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 16:26:44.167493 containerd[1277]: time="2024-06-25T16:26:44.167168936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:44.230673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:26:44.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:44.231007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:44.234849 kernel: audit: type=1130 audit(1719332804.230:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:44.234986 kernel: audit: type=1131 audit(1719332804.230:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:44.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:44.242699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:44.421359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:44.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:44.446091 kernel: audit: type=1130 audit(1719332804.420:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:44.554047 kubelet[1802]: E0625 16:26:44.553976 1802 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:26:44.557710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:26:44.557951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:26:44.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:44.564858 kernel: audit: type=1131 audit(1719332804.558:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:26:47.484967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:47.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:47.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:47.491298 kernel: audit: type=1130 audit(1719332807.484:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:47.491436 kernel: audit: type=1131 audit(1719332807.484:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:47.494029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:47.538718 systemd[1]: Reloading. Jun 25 16:26:47.911133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
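
The kubelet is crash-looping here for a mundane reason: systemd starts the unit before any bootstrap tooling has written /var/lib/kubelet/config.yaml, so run.go exits with status 1 and systemd schedules another restart. The loop resolves on its own once the config file appears on disk (written by kubeadm or whatever provisions the node). Below is a tiny, purely illustrative sketch of the equivalent wait-for-file check; the kubelet itself does not wait like this, it simply exits and relies on the unit's restart policy.

package main

import (
	"fmt"
	"os"
	"time"
)

// Block until the kubelet configuration the log shows as missing exists.
func waitForFile(path string, interval time.Duration) {
	for {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("found", path)
			return
		} else if !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "stat:", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	waitForFile("/var/lib/kubelet/config.yaml", 2*time.Second)
}
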
Jun 25 16:26:48.069000 audit: BPF prog-id=38 op=LOAD Jun 25 16:26:48.071000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:26:48.072948 kernel: audit: type=1334 audit(1719332808.069:249): prog-id=38 op=LOAD Jun 25 16:26:48.073074 kernel: audit: type=1334 audit(1719332808.071:250): prog-id=35 op=UNLOAD Jun 25 16:26:48.073131 kernel: audit: type=1334 audit(1719332808.071:251): prog-id=39 op=LOAD Jun 25 16:26:48.071000 audit: BPF prog-id=39 op=LOAD Jun 25 16:26:48.071000 audit: BPF prog-id=40 op=LOAD Jun 25 16:26:48.074702 kernel: audit: type=1334 audit(1719332808.071:252): prog-id=40 op=LOAD Jun 25 16:26:48.071000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:26:48.071000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:26:48.071000 audit: BPF prog-id=41 op=LOAD Jun 25 16:26:48.071000 audit: BPF prog-id=42 op=LOAD Jun 25 16:26:48.071000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:26:48.071000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:26:48.073000 audit: BPF prog-id=43 op=LOAD Jun 25 16:26:48.073000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:26:48.076000 audit: BPF prog-id=44 op=LOAD Jun 25 16:26:48.076000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:26:48.082000 audit: BPF prog-id=45 op=LOAD Jun 25 16:26:48.082000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:26:48.082000 audit: BPF prog-id=46 op=LOAD Jun 25 16:26:48.083000 audit: BPF prog-id=47 op=LOAD Jun 25 16:26:48.083000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:26:48.083000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:26:48.084000 audit: BPF prog-id=48 op=LOAD Jun 25 16:26:48.084000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:26:48.084000 audit: BPF prog-id=49 op=LOAD Jun 25 16:26:48.084000 audit: BPF prog-id=50 op=LOAD Jun 25 16:26:48.084000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:26:48.084000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:26:48.086000 audit: BPF prog-id=51 op=LOAD Jun 25 16:26:48.086000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:26:48.117079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:48.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:48.131519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:48.133094 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:26:48.133530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:48.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:48.141706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:48.324077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:48.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:48.402325 kubelet[1934]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 16:26:48.402812 kubelet[1934]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:26:48.402892 kubelet[1934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:26:48.403108 kubelet[1934]: I0625 16:26:48.403075 1934 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:26:48.728174 kubelet[1934]: I0625 16:26:48.727969 1934 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:26:48.728174 kubelet[1934]: I0625 16:26:48.728016 1934 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:26:48.729118 kubelet[1934]: I0625 16:26:48.729086 1934 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:26:48.750383 kubelet[1934]: I0625 16:26:48.750318 1934 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:26:48.752593 kubelet[1934]: E0625 16:26:48.752550 1934 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://161.35.235.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.778965 kubelet[1934]: I0625 16:26:48.778915 1934 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:26:48.781164 kubelet[1934]: I0625 16:26:48.781055 1934 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:26:48.781493 kubelet[1934]: I0625 16:26:48.781154 1934 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3815.2.4-0-d0607f9d2c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:26:48.782409 kubelet[1934]: I0625 16:26:48.782358 1934 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:26:48.782409 kubelet[1934]: I0625 16:26:48.782410 1934 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:26:48.783829 kubelet[1934]: I0625 16:26:48.783772 1934 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:48.784981 kubelet[1934]: I0625 16:26:48.784944 1934 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:26:48.784981 kubelet[1934]: I0625 16:26:48.784983 1934 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:26:48.785192 kubelet[1934]: I0625 16:26:48.785021 1934 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:26:48.785192 kubelet[1934]: I0625 16:26:48.785066 1934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:26:48.791671 kubelet[1934]: W0625 16:26:48.790791 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://161.35.235.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-0-d0607f9d2c&limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.791671 kubelet[1934]: E0625 16:26:48.790913 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://161.35.235.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-0-d0607f9d2c&limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.791671 kubelet[1934]: I0625 16:26:48.791605 1934 kuberuntime_manager.go:261] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:26:48.794263 kubelet[1934]: I0625 16:26:48.793559 1934 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:26:48.794263 kubelet[1934]: W0625 16:26:48.793690 1934 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:26:48.794857 kubelet[1934]: I0625 16:26:48.794826 1934 server.go:1264] "Started kubelet" Jun 25 16:26:48.795124 kubelet[1934]: W0625 16:26:48.795058 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://161.35.235.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.795213 kubelet[1934]: E0625 16:26:48.795142 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://161.35.235.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.799673 kubelet[1934]: I0625 16:26:48.799622 1934 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:26:48.801083 kubelet[1934]: I0625 16:26:48.801057 1934 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:26:48.803234 kubelet[1934]: I0625 16:26:48.802910 1934 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:26:48.803419 kubelet[1934]: I0625 16:26:48.803337 1934 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:26:48.804533 kubelet[1934]: E0625 16:26:48.804281 1934 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://161.35.235.79:6443/api/v1/namespaces/default/events\": dial tcp 161.35.235.79:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.4-0-d0607f9d2c.17dc4c1486e34bea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.4-0-d0607f9d2c,UID:ci-3815.2.4-0-d0607f9d2c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.4-0-d0607f9d2c,},FirstTimestamp:2024-06-25 16:26:48.794786794 +0000 UTC m=+0.461450645,LastTimestamp:2024-06-25 16:26:48.794786794 +0000 UTC m=+0.461450645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.4-0-d0607f9d2c,}" Jun 25 16:26:48.805741 kubelet[1934]: I0625 16:26:48.805716 1934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:26:48.810486 kubelet[1934]: E0625 16:26:48.810438 1934 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:26:48.811000 audit[1945]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.811000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd04392d30 a2=0 a3=7f27968a0e90 items=0 ppid=1934 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:26:48.814000 audit[1946]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.814000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd6ee7f30 a2=0 a3=7f335abb1e90 items=0 ppid=1934 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:26:48.816023 kubelet[1934]: E0625 16:26:48.815968 1934 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.4-0-d0607f9d2c\" not found" Jun 25 16:26:48.816132 kubelet[1934]: I0625 16:26:48.816072 1934 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:26:48.816248 kubelet[1934]: I0625 16:26:48.816225 1934 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:26:48.816354 kubelet[1934]: I0625 16:26:48.816340 1934 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:26:48.816994 kubelet[1934]: W0625 16:26:48.816885 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://161.35.235.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.816994 kubelet[1934]: E0625 16:26:48.816977 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://161.35.235.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.818495 kubelet[1934]: E0625 16:26:48.818416 1934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.235.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-0-d0607f9d2c?timeout=10s\": dial tcp 161.35.235.79:6443: connect: connection refused" interval="200ms" Jun 25 16:26:48.821668 kubelet[1934]: I0625 16:26:48.821627 1934 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:26:48.821668 kubelet[1934]: I0625 16:26:48.821657 1934 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:26:48.821902 kubelet[1934]: I0625 16:26:48.821756 1934 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 
16:26:48.822000 audit[1948]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.822000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe14de3dd0 a2=0 a3=7f188901ae90 items=0 ppid=1934 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:48.828000 audit[1950]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.828000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8d3dfbe0 a2=0 a3=7fc5ecb35e90 items=0 ppid=1934 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:26:48.860000 audit[1956]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.860000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd52918af0 a2=0 a3=7fac27a0ae90 items=0 ppid=1934 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:26:48.863994 kubelet[1934]: I0625 16:26:48.863945 1934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:26:48.864000 audit[1957]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:48.864000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffed0027d0 a2=0 a3=7fa38f66de90 items=0 ppid=1934 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:26:48.867569 kubelet[1934]: I0625 16:26:48.867538 1934 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:26:48.868171 kubelet[1934]: I0625 16:26:48.868152 1934 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:26:48.868345 kubelet[1934]: I0625 16:26:48.868331 1934 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:26:48.868492 kubelet[1934]: E0625 16:26:48.868471 1934 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:26:48.868000 audit[1958]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.868000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe409a8430 a2=0 a3=7fe458b2ee90 items=0 ppid=1934 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:26:48.871667 kubelet[1934]: W0625 16:26:48.871607 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://161.35.235.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.871847 kubelet[1934]: E0625 16:26:48.871834 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://161.35.235.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:48.872281 kubelet[1934]: I0625 16:26:48.872236 1934 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:26:48.872281 kubelet[1934]: I0625 16:26:48.872277 1934 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:26:48.872435 kubelet[1934]: I0625 16:26:48.872305 1934 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:48.872000 audit[1959]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:48.872000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5ad50740 a2=0 a3=7f0a6c4ede90 items=0 ppid=1934 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:26:48.875935 kubelet[1934]: I0625 16:26:48.875896 1934 policy_none.go:49] "None policy: Start" Jun 25 16:26:48.875000 audit[1961]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.875000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd37a5f890 a2=0 a3=7fb75c675e90 items=0 ppid=1934 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.875000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:26:48.877338 kubelet[1934]: I0625 16:26:48.877315 1934 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:26:48.877452 kubelet[1934]: I0625 16:26:48.877442 1934 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:26:48.877000 audit[1962]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:48.877000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffdcb13a6d0 a2=0 a3=7f0eb1a28e90 items=0 ppid=1934 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:26:48.880000 audit[1964]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:26:48.880000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd99626240 a2=0 a3=7fa100b1ee90 items=0 ppid=1934 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:26:48.880000 audit[1963]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:26:48.880000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec1193590 a2=0 a3=7f7dfd783e90 items=0 ppid=1934 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:48.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:26:48.893034 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:26:48.911979 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:26:48.919809 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
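
With CgroupDriver "systemd" and CgroupsPerQOS enabled (both visible in the NodeConfig dump earlier in the log), the kubelet asks systemd to create one slice per QoS class; those are the kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created just above. A short sketch that walks the resulting hierarchy, assuming a unified cgroup v2 mount at /sys/fs/cgroup (an assumption about this host, not something stated in the log):

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// Print every *.slice directory under the kubelet's root slice, i.e. the
// kubepods / kubepods-burstable / kubepods-besteffort hierarchy plus any
// per-pod slices nested inside them.
func main() {
	root := "/sys/fs/cgroup/kubepods.slice"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() && strings.HasSuffix(d.Name(), ".slice") {
			fmt.Println(path)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
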
Jun 25 16:26:48.921775 kubelet[1934]: I0625 16:26:48.921700 1934 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:48.923007 kubelet[1934]: E0625 16:26:48.922565 1934 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.235.79:6443/api/v1/nodes\": dial tcp 161.35.235.79:6443: connect: connection refused" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:48.931868 kubelet[1934]: I0625 16:26:48.931789 1934 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:26:48.935303 kubelet[1934]: I0625 16:26:48.935219 1934 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 16:26:48.935726 kubelet[1934]: I0625 16:26:48.935706 1934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:26:48.936808 kubelet[1934]: E0625 16:26:48.936731 1934 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-0-d0607f9d2c\" not found" Jun 25 16:26:48.971796 kubelet[1934]: I0625 16:26:48.971707 1934 topology_manager.go:215] "Topology Admit Handler" podUID="d69b9d614162bb448252d41244e461f6" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:48.973557 kubelet[1934]: I0625 16:26:48.973517 1934 topology_manager.go:215] "Topology Admit Handler" podUID="39e1cbb400f50359088988ce9245826a" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:48.975402 kubelet[1934]: I0625 16:26:48.975364 1934 topology_manager.go:215] "Topology Admit Handler" podUID="d3de23907c7c171f51c783fbbeb46e9f" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:48.987538 systemd[1]: Created slice kubepods-burstable-podd69b9d614162bb448252d41244e461f6.slice - libcontainer container kubepods-burstable-podd69b9d614162bb448252d41244e461f6.slice. Jun 25 16:26:49.004821 systemd[1]: Created slice kubepods-burstable-pod39e1cbb400f50359088988ce9245826a.slice - libcontainer container kubepods-burstable-pod39e1cbb400f50359088988ce9245826a.slice. 
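
Everything the kubelet attempts against https://161.35.235.79:6443 at this point (registering the node, the informer LISTs, posting events) fails with "connection refused", because the kube-apiserver it is about to launch as a static pod is not serving yet. A minimal client-go sketch that reproduces the same call shape follows; the kubeconfig path is an assumption (kubeadm-style nodes usually keep the kubelet credentials in /etc/kubernetes/kubelet.conf), and the node name is the one in the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; adjust to wherever this node's kubelet kubeconfig lives.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same lookup the kubelet keeps retrying; until the static
	// kube-apiserver pod is listening on 6443 this returns
	// "connect: connection refused", exactly as in the log.
	_, err = cs.CoreV1().Nodes().Get(context.TODO(),
		"ci-3815.2.4-0-d0607f9d2c", metav1.GetOptions{})
	fmt.Println("node lookup:", err)
}
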
Jun 25 16:26:49.017395 kubelet[1934]: I0625 16:26:49.017354 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.017580 kubelet[1934]: I0625 16:26:49.017408 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.017580 kubelet[1934]: I0625 16:26:49.017441 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.017580 kubelet[1934]: I0625 16:26:49.017474 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.017580 kubelet[1934]: I0625 16:26:49.017502 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d69b9d614162bb448252d41244e461f6-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d69b9d614162bb448252d41244e461f6\") " pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.017580 kubelet[1934]: I0625 16:26:49.017530 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d69b9d614162bb448252d41244e461f6-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d69b9d614162bb448252d41244e461f6\") " pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.017849 kubelet[1934]: I0625 16:26:49.017556 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d69b9d614162bb448252d41244e461f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d69b9d614162bb448252d41244e461f6\") " pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.019476 kubelet[1934]: E0625 16:26:49.019427 1934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.235.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-0-d0607f9d2c?timeout=10s\": dial tcp 161.35.235.79:6443: connect: connection refused" interval="400ms" Jun 25 16:26:49.023072 systemd[1]: Created slice kubepods-burstable-podd3de23907c7c171f51c783fbbeb46e9f.slice - libcontainer container kubepods-burstable-podd3de23907c7c171f51c783fbbeb46e9f.slice. 
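
The volumes the reconciler is verifying here (ca-certs, flexvolume-dir, k8s-certs, kubeconfig, usr-share-ca-certificates) are all plain kubernetes.io/host-path volumes declared in the static control-plane pod manifests under /etc/kubernetes/manifests. As an illustration only, this is how one such volume/mount pair looks when built with the Go API types; the /etc/ssl/certs mapping is a typical kubeadm convention and is assumed, not read from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate

	// Hypothetical mapping: mount the host CA bundle read-only into a
	// control-plane container under the "ca-certs" volume name seen above.
	vol := corev1.Volume{
		Name: "ca-certs",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/etc/ssl/certs",
				Type: &hostPathType,
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "ca-certs",
		MountPath: "/etc/ssl/certs",
		ReadOnly:  true,
	}
	fmt.Printf("volume: %+v\nmount: %+v\n", vol, mount)
}
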
Jun 25 16:26:49.118264 kubelet[1934]: I0625 16:26:49.118197 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.118264 kubelet[1934]: I0625 16:26:49.118259 1934 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3de23907c7c171f51c783fbbeb46e9f-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d3de23907c7c171f51c783fbbeb46e9f\") " pod="kube-system/kube-scheduler-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.124835 kubelet[1934]: I0625 16:26:49.124785 1934 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.125278 kubelet[1934]: E0625 16:26:49.125244 1934 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.235.79:6443/api/v1/nodes\": dial tcp 161.35.235.79:6443: connect: connection refused" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.302269 kubelet[1934]: E0625 16:26:49.302109 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:49.305120 containerd[1277]: time="2024-06-25T16:26:49.304897678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-0-d0607f9d2c,Uid:d69b9d614162bb448252d41244e461f6,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:49.317278 kubelet[1934]: E0625 16:26:49.317227 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:49.329520 containerd[1277]: time="2024-06-25T16:26:49.329441286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-0-d0607f9d2c,Uid:39e1cbb400f50359088988ce9245826a,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:49.333355 kubelet[1934]: E0625 16:26:49.333308 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:49.334516 containerd[1277]: time="2024-06-25T16:26:49.334447805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-0-d0607f9d2c,Uid:d3de23907c7c171f51c783fbbeb46e9f,Namespace:kube-system,Attempt:0,}" Jun 25 16:26:49.420338 kubelet[1934]: E0625 16:26:49.420270 1934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.235.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-0-d0607f9d2c?timeout=10s\": dial tcp 161.35.235.79:6443: connect: connection refused" interval="800ms" Jun 25 16:26:49.530100 kubelet[1934]: I0625 16:26:49.529616 1934 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.530946 kubelet[1934]: E0625 16:26:49.530860 1934 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.235.79:6443/api/v1/nodes\": dial tcp 161.35.235.79:6443: connect: connection 
refused" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:49.868287 kubelet[1934]: W0625 16:26:49.868173 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://161.35.235.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:49.868287 kubelet[1934]: E0625 16:26:49.868257 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://161.35.235.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.065440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375350584.mount: Deactivated successfully. Jun 25 16:26:50.077673 containerd[1277]: time="2024-06-25T16:26:50.077533312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.081702 kubelet[1934]: W0625 16:26:50.081593 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://161.35.235.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.081702 kubelet[1934]: E0625 16:26:50.081672 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://161.35.235.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.082533 containerd[1277]: time="2024-06-25T16:26:50.082425610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:26:50.083832 containerd[1277]: time="2024-06-25T16:26:50.083764592Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.086573 containerd[1277]: time="2024-06-25T16:26:50.086513261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.086932 containerd[1277]: time="2024-06-25T16:26:50.086721725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:26:50.089533 containerd[1277]: time="2024-06-25T16:26:50.089478475Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.091160 containerd[1277]: time="2024-06-25T16:26:50.089923727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:26:50.092190 containerd[1277]: time="2024-06-25T16:26:50.092145208Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.096481 containerd[1277]: time="2024-06-25T16:26:50.096424733Z" level=info msg="ImageUpdate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.101305 containerd[1277]: time="2024-06-25T16:26:50.101250265Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.105491 containerd[1277]: time="2024-06-25T16:26:50.105419788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 770.805268ms" Jun 25 16:26:50.106677 containerd[1277]: time="2024-06-25T16:26:50.106631349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.109791 containerd[1277]: time="2024-06-25T16:26:50.109711030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 779.754755ms" Jun 25 16:26:50.110542 containerd[1277]: time="2024-06-25T16:26:50.110489927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.111793 containerd[1277]: time="2024-06-25T16:26:50.111742032Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.113057 containerd[1277]: time="2024-06-25T16:26:50.113004180Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.120134 containerd[1277]: time="2024-06-25T16:26:50.118329904Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:26:50.123004 containerd[1277]: time="2024-06-25T16:26:50.119712566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 814.64946ms" Jun 25 16:26:50.222179 kubelet[1934]: E0625 16:26:50.222103 1934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.235.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-0-d0607f9d2c?timeout=10s\": dial tcp 161.35.235.79:6443: connect: 
connection refused" interval="1.6s" Jun 25 16:26:50.316751 kubelet[1934]: W0625 16:26:50.316673 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://161.35.235.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-0-d0607f9d2c&limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.316751 kubelet[1934]: E0625 16:26:50.316754 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://161.35.235.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-0-d0607f9d2c&limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.332686 kubelet[1934]: I0625 16:26:50.332633 1934 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:50.333173 kubelet[1934]: E0625 16:26:50.333136 1934 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.235.79:6443/api/v1/nodes\": dial tcp 161.35.235.79:6443: connect: connection refused" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:50.337171 containerd[1277]: time="2024-06-25T16:26:50.336954630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:50.337760 containerd[1277]: time="2024-06-25T16:26:50.337189427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:50.337760 containerd[1277]: time="2024-06-25T16:26:50.337255325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:50.337760 containerd[1277]: time="2024-06-25T16:26:50.337300678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:50.342773 containerd[1277]: time="2024-06-25T16:26:50.342532247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:50.343126 containerd[1277]: time="2024-06-25T16:26:50.342957119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:50.343126 containerd[1277]: time="2024-06-25T16:26:50.343009186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:50.343331 containerd[1277]: time="2024-06-25T16:26:50.343243818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:50.355606 containerd[1277]: time="2024-06-25T16:26:50.355400879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:50.355939 containerd[1277]: time="2024-06-25T16:26:50.355627636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:50.355939 containerd[1277]: time="2024-06-25T16:26:50.355675816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:50.355939 containerd[1277]: time="2024-06-25T16:26:50.355722817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:50.386361 systemd[1]: Started cri-containerd-019339c33a2d0b4c52421deca52f696a9e23e425e81b35401d7704954e8f544e.scope - libcontainer container 019339c33a2d0b4c52421deca52f696a9e23e425e81b35401d7704954e8f544e. Jun 25 16:26:50.393207 systemd[1]: Started cri-containerd-6b8f24e11e4a7e10195957579fba21610e782a823c892075d1641ebb910d83d8.scope - libcontainer container 6b8f24e11e4a7e10195957579fba21610e782a823c892075d1641ebb910d83d8. Jun 25 16:26:50.420000 audit: BPF prog-id=52 op=LOAD Jun 25 16:26:50.423945 kernel: kauditd_printk_skb: 63 callbacks suppressed Jun 25 16:26:50.424130 kernel: audit: type=1334 audit(1719332810.420:292): prog-id=52 op=LOAD Jun 25 16:26:50.424189 kernel: audit: type=1334 audit(1719332810.421:293): prog-id=53 op=LOAD Jun 25 16:26:50.421000 audit: BPF prog-id=53 op=LOAD Jun 25 16:26:50.424713 kernel: audit: type=1300 audit(1719332810.421:293): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00015b988 a2=78 a3=0 items=0 ppid=1995 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.421000 audit[2024]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00015b988 a2=78 a3=0 items=0 ppid=1995 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031393333396333336132643062346335323432316465636135326636 Jun 25 16:26:50.436278 kernel: audit: type=1327 audit(1719332810.421:293): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031393333396333336132643062346335323432316465636135326636 Jun 25 16:26:50.421000 audit: BPF prog-id=54 op=LOAD Jun 25 16:26:50.442132 kernel: audit: type=1334 audit(1719332810.421:294): prog-id=54 op=LOAD Jun 25 16:26:50.421000 audit[2024]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00015b720 a2=78 a3=0 items=0 ppid=1995 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.444439 systemd[1]: Started cri-containerd-f5b0fc3151b96370e75ff36a77ca4509ad43d45890a1160a944e758f3eebb76c.scope - libcontainer container f5b0fc3151b96370e75ff36a77ca4509ad43d45890a1160a944e758f3eebb76c. 
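
Each "Started cri-containerd-<id>.scope - libcontainer container <id>" line above is a transient systemd scope that runc creates (via systemd) for a container when the systemd cgroup driver is in use; the runc BPF program loads in the audit records belong to the same container starts. A short, illustrative sketch that lists those scopes over the systemd D-Bus API using github.com/coreos/go-systemd/v22/dbus:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range units {
		// Transient scopes created for CRI containers.
		if strings.HasPrefix(u.Name, "cri-containerd-") && strings.HasSuffix(u.Name, ".scope") {
			fmt.Println(u.Name, u.ActiveState)
		}
	}
}
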
Jun 25 16:26:50.449173 kernel: audit: type=1300 audit(1719332810.421:294): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00015b720 a2=78 a3=0 items=0 ppid=1995 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031393333396333336132643062346335323432316465636135326636 Jun 25 16:26:50.458160 kernel: audit: type=1327 audit(1719332810.421:294): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031393333396333336132643062346335323432316465636135326636 Jun 25 16:26:50.421000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:26:50.464147 kernel: audit: type=1334 audit(1719332810.421:295): prog-id=54 op=UNLOAD Jun 25 16:26:50.421000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:26:50.467147 kernel: audit: type=1334 audit(1719332810.421:296): prog-id=53 op=UNLOAD Jun 25 16:26:50.421000 audit: BPF prog-id=55 op=LOAD Jun 25 16:26:50.472620 kernel: audit: type=1334 audit(1719332810.421:297): prog-id=55 op=LOAD Jun 25 16:26:50.472726 kubelet[1934]: W0625 16:26:50.472543 1934 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://161.35.235.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.472726 kubelet[1934]: E0625 16:26:50.472586 1934 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://161.35.235.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.421000 audit[2024]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00015bbe0 a2=78 a3=0 items=0 ppid=1995 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031393333396333336132643062346335323432316465636135326636 Jun 25 16:26:50.436000 audit: BPF prog-id=56 op=LOAD Jun 25 16:26:50.437000 audit: BPF prog-id=57 op=LOAD Jun 25 16:26:50.437000 audit[2016]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=1991 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662386632346531316534613765313031393539353735373966626132 Jun 25 16:26:50.437000 audit: BPF prog-id=58 op=LOAD Jun 25 16:26:50.437000 
audit[2016]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=1991 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662386632346531316534613765313031393539353735373966626132 Jun 25 16:26:50.437000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:26:50.437000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:26:50.437000 audit: BPF prog-id=59 op=LOAD Jun 25 16:26:50.437000 audit[2016]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=1991 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662386632346531316534613765313031393539353735373966626132 Jun 25 16:26:50.484000 audit: BPF prog-id=60 op=LOAD Jun 25 16:26:50.488000 audit: BPF prog-id=61 op=LOAD Jun 25 16:26:50.488000 audit[2035]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=1996 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635623066633331353162393633373065373566663336613737636134 Jun 25 16:26:50.495000 audit: BPF prog-id=62 op=LOAD Jun 25 16:26:50.495000 audit[2035]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=1996 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635623066633331353162393633373065373566663336613737636134 Jun 25 16:26:50.497000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:26:50.497000 audit: BPF prog-id=61 op=UNLOAD Jun 25 16:26:50.497000 audit: BPF prog-id=63 op=LOAD Jun 25 16:26:50.497000 audit[2035]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=1996 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.497000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635623066633331353162393633373065373566663336613737636134 Jun 25 16:26:50.532343 containerd[1277]: time="2024-06-25T16:26:50.532275938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-0-d0607f9d2c,Uid:d69b9d614162bb448252d41244e461f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b8f24e11e4a7e10195957579fba21610e782a823c892075d1641ebb910d83d8\"" Jun 25 16:26:50.539377 kubelet[1934]: E0625 16:26:50.539327 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:50.544844 containerd[1277]: time="2024-06-25T16:26:50.544771835Z" level=info msg="CreateContainer within sandbox \"6b8f24e11e4a7e10195957579fba21610e782a823c892075d1641ebb910d83d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:26:50.568775 containerd[1277]: time="2024-06-25T16:26:50.568682114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-0-d0607f9d2c,Uid:d3de23907c7c171f51c783fbbeb46e9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"019339c33a2d0b4c52421deca52f696a9e23e425e81b35401d7704954e8f544e\"" Jun 25 16:26:50.570543 kubelet[1934]: E0625 16:26:50.570020 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:50.573120 containerd[1277]: time="2024-06-25T16:26:50.573069335Z" level=info msg="CreateContainer within sandbox \"019339c33a2d0b4c52421deca52f696a9e23e425e81b35401d7704954e8f544e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:26:50.593999 containerd[1277]: time="2024-06-25T16:26:50.593932857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-0-d0607f9d2c,Uid:39e1cbb400f50359088988ce9245826a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b0fc3151b96370e75ff36a77ca4509ad43d45890a1160a944e758f3eebb76c\"" Jun 25 16:26:50.596415 kubelet[1934]: E0625 16:26:50.595809 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:50.599222 containerd[1277]: time="2024-06-25T16:26:50.599172406Z" level=info msg="CreateContainer within sandbox \"f5b0fc3151b96370e75ff36a77ca4509ad43d45890a1160a944e758f3eebb76c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:26:50.601749 containerd[1277]: time="2024-06-25T16:26:50.601664707Z" level=info msg="CreateContainer within sandbox \"6b8f24e11e4a7e10195957579fba21610e782a823c892075d1641ebb910d83d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f911d25078db0ca5fa4550a996bce969c7040d0d8efc8e9f3d0fdd1c732c516\"" Jun 25 16:26:50.602464 containerd[1277]: time="2024-06-25T16:26:50.602412827Z" level=info msg="StartContainer for \"0f911d25078db0ca5fa4550a996bce969c7040d0d8efc8e9f3d0fdd1c732c516\"" Jun 25 16:26:50.605754 containerd[1277]: time="2024-06-25T16:26:50.605699769Z" level=info msg="CreateContainer within sandbox \"019339c33a2d0b4c52421deca52f696a9e23e425e81b35401d7704954e8f544e\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1550168677e1c606921b18190f700821c0dce8d9b3a61807c61b54a6385811d\"" Jun 25 16:26:50.607268 containerd[1277]: time="2024-06-25T16:26:50.607223050Z" level=info msg="StartContainer for \"a1550168677e1c606921b18190f700821c0dce8d9b3a61807c61b54a6385811d\"" Jun 25 16:26:50.622714 containerd[1277]: time="2024-06-25T16:26:50.622634888Z" level=info msg="CreateContainer within sandbox \"f5b0fc3151b96370e75ff36a77ca4509ad43d45890a1160a944e758f3eebb76c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1f150b0b78d261ff175950b405b2ac1186f788c71884061b2f7583d0a7488e5\"" Jun 25 16:26:50.623586 containerd[1277]: time="2024-06-25T16:26:50.623545403Z" level=info msg="StartContainer for \"d1f150b0b78d261ff175950b405b2ac1186f788c71884061b2f7583d0a7488e5\"" Jun 25 16:26:50.659405 systemd[1]: Started cri-containerd-a1550168677e1c606921b18190f700821c0dce8d9b3a61807c61b54a6385811d.scope - libcontainer container a1550168677e1c606921b18190f700821c0dce8d9b3a61807c61b54a6385811d. Jun 25 16:26:50.683366 systemd[1]: Started cri-containerd-0f911d25078db0ca5fa4550a996bce969c7040d0d8efc8e9f3d0fdd1c732c516.scope - libcontainer container 0f911d25078db0ca5fa4550a996bce969c7040d0d8efc8e9f3d0fdd1c732c516. Jun 25 16:26:50.696000 audit: BPF prog-id=64 op=LOAD Jun 25 16:26:50.698000 audit: BPF prog-id=65 op=LOAD Jun 25 16:26:50.698000 audit[2118]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=1995 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353530313638363737653163363036393231623138313930663730 Jun 25 16:26:50.698000 audit: BPF prog-id=66 op=LOAD Jun 25 16:26:50.698000 audit[2118]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=1995 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353530313638363737653163363036393231623138313930663730 Jun 25 16:26:50.698000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:26:50.698000 audit: BPF prog-id=65 op=UNLOAD Jun 25 16:26:50.698000 audit: BPF prog-id=67 op=LOAD Jun 25 16:26:50.698000 audit[2118]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=1995 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131353530313638363737653163363036393231623138313930663730 Jun 25 16:26:50.710366 systemd[1]: Started 
cri-containerd-d1f150b0b78d261ff175950b405b2ac1186f788c71884061b2f7583d0a7488e5.scope - libcontainer container d1f150b0b78d261ff175950b405b2ac1186f788c71884061b2f7583d0a7488e5. Jun 25 16:26:50.723000 audit: BPF prog-id=68 op=LOAD Jun 25 16:26:50.724000 audit: BPF prog-id=69 op=LOAD Jun 25 16:26:50.724000 audit[2116]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=1991 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393131643235303738646230636135666134353530613939366263 Jun 25 16:26:50.724000 audit: BPF prog-id=70 op=LOAD Jun 25 16:26:50.724000 audit[2116]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=1991 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393131643235303738646230636135666134353530613939366263 Jun 25 16:26:50.724000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:26:50.724000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:26:50.724000 audit: BPF prog-id=71 op=LOAD Jun 25 16:26:50.724000 audit[2116]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=1991 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393131643235303738646230636135666134353530613939366263 Jun 25 16:26:50.743000 audit: BPF prog-id=72 op=LOAD Jun 25 16:26:50.744000 audit: BPF prog-id=73 op=LOAD Jun 25 16:26:50.744000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=1996 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663135306230623738643236316666313735393530623430356232 Jun 25 16:26:50.744000 audit: BPF prog-id=74 op=LOAD Jun 25 16:26:50.744000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=1996 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.744000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663135306230623738643236316666313735393530623430356232 Jun 25 16:26:50.744000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:26:50.744000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:26:50.744000 audit: BPF prog-id=75 op=LOAD Jun 25 16:26:50.744000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=1996 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:50.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663135306230623738643236316666313735393530623430356232 Jun 25 16:26:50.809789 containerd[1277]: time="2024-06-25T16:26:50.809706899Z" level=info msg="StartContainer for \"a1550168677e1c606921b18190f700821c0dce8d9b3a61807c61b54a6385811d\" returns successfully" Jun 25 16:26:50.824235 containerd[1277]: time="2024-06-25T16:26:50.824114533Z" level=info msg="StartContainer for \"0f911d25078db0ca5fa4550a996bce969c7040d0d8efc8e9f3d0fdd1c732c516\" returns successfully" Jun 25 16:26:50.834948 containerd[1277]: time="2024-06-25T16:26:50.834876374Z" level=info msg="StartContainer for \"d1f150b0b78d261ff175950b405b2ac1186f788c71884061b2f7583d0a7488e5\" returns successfully" Jun 25 16:26:50.867150 kubelet[1934]: E0625 16:26:50.867093 1934 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://161.35.235.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 161.35.235.79:6443: connect: connection refused Jun 25 16:26:50.887404 kubelet[1934]: E0625 16:26:50.887340 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:50.904863 kubelet[1934]: E0625 16:26:50.904816 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:50.910184 kubelet[1934]: E0625 16:26:50.910139 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:51.912742 kubelet[1934]: E0625 16:26:51.912685 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:51.935441 kubelet[1934]: I0625 16:26:51.935398 1934 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:52.470000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:52.470000 audit[2162]: SYSCALL 
arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000a50000 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:26:52.470000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:52.474000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:52.474000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0002b63c0 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:26:52.474000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:52.914695 kubelet[1934]: E0625 16:26:52.914074 1934 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:53.712000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:53.712000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c00722d5c0 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:26:53.712000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:26:53.713000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:53.713000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c00349dc80 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:26:53.713000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:26:53.720000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=526920 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:53.720000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4d a1=c00722d860 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:26:53.720000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:26:53.721000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:53.721000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4d a1=c00349dd60 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:26:53.721000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:26:53.721000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:53.721000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4d a1=c00722d920 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:26:53.721000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:26:53.729000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=526914 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:53.729000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=63 a1=c00619ffb0 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:26:53.729000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:26:53.885728 kubelet[1934]: E0625 16:26:53.885665 1934 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.4-0-d0607f9d2c\" not found" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:53.941548 kubelet[1934]: I0625 16:26:53.941488 1934 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:54.796199 kubelet[1934]: I0625 16:26:54.796142 1934 apiserver.go:52] "Watching apiserver" Jun 25 16:26:54.816907 kubelet[1934]: I0625 16:26:54.816861 1934 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 16:26:56.323738 systemd[1]: Reloading. Jun 25 16:26:56.631097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:26:56.795000 audit: BPF prog-id=76 op=LOAD Jun 25 16:26:56.797637 kernel: kauditd_printk_skb: 86 callbacks suppressed Jun 25 16:26:56.797810 kernel: audit: type=1334 audit(1719332816.795:336): prog-id=76 op=LOAD Jun 25 16:26:56.799000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:26:56.803170 kernel: audit: type=1334 audit(1719332816.799:337): prog-id=64 op=UNLOAD Jun 25 16:26:56.805000 audit: BPF prog-id=77 op=LOAD Jun 25 16:26:56.809150 kernel: audit: type=1334 audit(1719332816.805:338): prog-id=77 op=LOAD Jun 25 16:26:56.809310 kernel: audit: type=1334 audit(1719332816.808:339): prog-id=38 op=UNLOAD Jun 25 16:26:56.808000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:26:56.816671 kernel: audit: type=1334 audit(1719332816.808:340): prog-id=78 op=LOAD Jun 25 16:26:56.816848 kernel: audit: type=1334 audit(1719332816.808:341): prog-id=79 op=LOAD Jun 25 16:26:56.816889 kernel: audit: type=1334 audit(1719332816.808:342): prog-id=39 op=UNLOAD Jun 25 16:26:56.808000 audit: BPF prog-id=78 op=LOAD Jun 25 16:26:56.808000 audit: BPF prog-id=79 op=LOAD Jun 25 16:26:56.808000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:26:56.808000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:26:56.821392 kernel: audit: type=1334 audit(1719332816.808:343): prog-id=40 op=UNLOAD Jun 25 16:26:56.810000 audit: BPF prog-id=80 op=LOAD Jun 25 16:26:56.825502 kernel: audit: type=1334 audit(1719332816.810:344): prog-id=80 op=LOAD Jun 25 16:26:56.825627 kernel: audit: type=1334 audit(1719332816.810:345): prog-id=81 op=LOAD Jun 25 16:26:56.810000 audit: BPF prog-id=81 op=LOAD Jun 25 16:26:56.810000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:26:56.810000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:26:56.810000 audit: BPF prog-id=82 op=LOAD Jun 25 16:26:56.810000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:26:56.810000 audit: BPF prog-id=83 op=LOAD Jun 25 16:26:56.810000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:26:56.812000 audit: BPF prog-id=84 op=LOAD Jun 25 16:26:56.812000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:26:56.815000 audit: BPF prog-id=85 op=LOAD Jun 25 16:26:56.815000 audit: BPF prog-id=44 op=UNLOAD 
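The proctitle= values in the audit PROCTITLE records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments (and visibly truncated, which is why the kube-apiserver and kube-controller-manager ones stop mid-argument). A minimal decoder in Python, using the leading portion of one of the runc proctitle values from this log as its example input:

    def decode_proctitle(hex_str: str) -> str:
        # Audit PROCTITLE payloads are hex-encoded argv with NUL separators.
        raw = bytes.fromhex(hex_str)
        return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

    # Leading portion of a runc PROCTITLE value seen above:
    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    ))
    # -> runc --root /run/containerd/runc/k8s.io

Decoding the kube-apiserver and kube-controller-manager records the same way recovers their (truncated) command lines, which is often quicker than cross-referencing the pid.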
Jun 25 16:26:56.819000 audit: BPF prog-id=86 op=LOAD Jun 25 16:26:56.819000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:26:56.824000 audit: BPF prog-id=87 op=LOAD Jun 25 16:26:56.824000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:26:56.824000 audit: BPF prog-id=88 op=LOAD Jun 25 16:26:56.824000 audit: BPF prog-id=89 op=LOAD Jun 25 16:26:56.824000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:26:56.824000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:26:56.825000 audit: BPF prog-id=90 op=LOAD Jun 25 16:26:56.825000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:26:56.825000 audit: BPF prog-id=91 op=LOAD Jun 25 16:26:56.825000 audit: BPF prog-id=92 op=LOAD Jun 25 16:26:56.825000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:26:56.825000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:26:56.827000 audit: BPF prog-id=93 op=LOAD Jun 25 16:26:56.827000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:26:56.828000 audit: BPF prog-id=94 op=LOAD Jun 25 16:26:56.828000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:26:56.829000 audit: BPF prog-id=95 op=LOAD Jun 25 16:26:56.829000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:26:56.860659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:56.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:56.889503 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:26:56.889787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:56.889874 systemd[1]: kubelet.service: Consumed 1.016s CPU time. Jun 25 16:26:56.894984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:26:57.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:57.100667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:26:57.230948 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:26:57.230948 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:26:57.230948 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
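The burst of audit: BPF prog-id=... op=LOAD/UNLOAD records around the systemd reload above (kauditd even reports suppressed callbacks) is consistent with systemd detaching and re-attaching the per-unit BPF programs it manages while units are reloaded and kubelet.service is restarted. A rough way to check that such a burst is balanced rather than leaking programs is to tally the records from a saved copy of this journal; a sketch, with "boot.log" as a placeholder path rather than anything taken from the log:

    import re
    from collections import Counter

    # Tally "audit: BPF prog-id=NN op=LOAD|UNLOAD" records from a saved copy of this log.
    record = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    loads, unloads = Counter(), Counter()
    with open("boot.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for prog_id, op in record.findall(line):
                (loads if op == "LOAD" else unloads)[prog_id] += 1

    print(sum(loads.values()), "LOAD records,", sum(unloads.values()), "UNLOAD records")
    print("loaded but not unloaded in this log:", sorted(set(loads) - set(unloads), key=int))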
Jun 25 16:26:57.231611 kubelet[2274]: I0625 16:26:57.231068 2274 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:26:57.239197 kubelet[2274]: I0625 16:26:57.239157 2274 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:26:57.239197 kubelet[2274]: I0625 16:26:57.239188 2274 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:26:57.239527 kubelet[2274]: I0625 16:26:57.239496 2274 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:26:57.241644 kubelet[2274]: I0625 16:26:57.241602 2274 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:26:57.246807 kubelet[2274]: I0625 16:26:57.245599 2274 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:26:57.266752 kubelet[2274]: I0625 16:26:57.266711 2274 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:26:57.267464 kubelet[2274]: I0625 16:26:57.267399 2274 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:26:57.267874 kubelet[2274]: I0625 16:26:57.267619 2274 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3815.2.4-0-d0607f9d2c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:26:57.268195 kubelet[2274]: I0625 16:26:57.268172 2274 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:26:57.268305 kubelet[2274]: I0625 16:26:57.268291 2274 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:26:57.268462 kubelet[2274]: I0625 16:26:57.268445 2274 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:57.268765 kubelet[2274]: I0625 16:26:57.268700 2274 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:26:57.268869 kubelet[2274]: I0625 16:26:57.268858 2274 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jun 25 16:26:57.268995 kubelet[2274]: I0625 16:26:57.268984 2274 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:26:57.269115 kubelet[2274]: I0625 16:26:57.269102 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:26:57.282190 kubelet[2274]: I0625 16:26:57.282142 2274 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:26:57.282650 kubelet[2274]: I0625 16:26:57.282627 2274 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:26:57.283370 kubelet[2274]: I0625 16:26:57.283352 2274 server.go:1264] "Started kubelet" Jun 25 16:26:57.287942 kubelet[2274]: I0625 16:26:57.287882 2274 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:26:57.289510 kubelet[2274]: I0625 16:26:57.289448 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:26:57.290009 kubelet[2274]: I0625 16:26:57.289983 2274 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:26:57.297525 kubelet[2274]: I0625 16:26:57.295853 2274 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:26:57.305550 kubelet[2274]: I0625 16:26:57.305262 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:26:57.323455 kubelet[2274]: I0625 16:26:57.323429 2274 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:26:57.323798 kubelet[2274]: I0625 16:26:57.323783 2274 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:26:57.324303 kubelet[2274]: I0625 16:26:57.324246 2274 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:26:57.326441 kubelet[2274]: E0625 16:26:57.326415 2274 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:26:57.335467 kubelet[2274]: I0625 16:26:57.335428 2274 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:26:57.342149 kubelet[2274]: I0625 16:26:57.342098 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:26:57.344047 kubelet[2274]: I0625 16:26:57.343997 2274 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:26:57.401569 kubelet[2274]: I0625 16:26:57.401518 2274 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 16:26:57.407062 kubelet[2274]: I0625 16:26:57.407017 2274 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:26:57.407062 kubelet[2274]: I0625 16:26:57.407057 2274 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:26:57.407295 kubelet[2274]: I0625 16:26:57.407082 2274 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:26:57.407349 kubelet[2274]: I0625 16:26:57.407320 2274 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:26:57.407349 kubelet[2274]: I0625 16:26:57.407335 2274 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:26:57.407475 kubelet[2274]: I0625 16:26:57.407359 2274 policy_none.go:49] "None policy: Start" Jun 25 16:26:57.409336 kubelet[2274]: I0625 16:26:57.409302 2274 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:26:57.409336 kubelet[2274]: I0625 16:26:57.409345 2274 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:26:57.409648 kubelet[2274]: I0625 16:26:57.409589 2274 state_mem.go:75] "Updated machine memory state" Jun 25 16:26:57.410593 kubelet[2274]: I0625 16:26:57.410480 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:26:57.410593 kubelet[2274]: I0625 16:26:57.410519 2274 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:26:57.410593 kubelet[2274]: I0625 16:26:57.410541 2274 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:26:57.410888 kubelet[2274]: E0625 16:26:57.410598 2274 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:26:57.427957 kubelet[2274]: I0625 16:26:57.427783 2274 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.431285 kubelet[2274]: I0625 16:26:57.430732 2274 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:26:57.432392 kubelet[2274]: I0625 16:26:57.432071 2274 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 16:26:57.432392 kubelet[2274]: I0625 16:26:57.432221 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:26:57.442169 kubelet[2274]: I0625 16:26:57.441426 2274 kubelet_node_status.go:112] "Node was previously registered" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.444326 kubelet[2274]: I0625 16:26:57.444293 2274 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.511635 kubelet[2274]: I0625 16:26:57.511487 2274 topology_manager.go:215] "Topology Admit Handler" podUID="d3de23907c7c171f51c783fbbeb46e9f" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.511948 kubelet[2274]: I0625 16:26:57.511928 2274 topology_manager.go:215] "Topology Admit Handler" podUID="d69b9d614162bb448252d41244e461f6" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.512150 kubelet[2274]: I0625 16:26:57.512133 2274 topology_manager.go:215] "Topology Admit Handler" podUID="39e1cbb400f50359088988ce9245826a" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.527630 kubelet[2274]: W0625 16:26:57.527190 2274 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] Jun 25 16:26:57.527630 kubelet[2274]: I0625 16:26:57.527537 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3de23907c7c171f51c783fbbeb46e9f-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d3de23907c7c171f51c783fbbeb46e9f\") " pod="kube-system/kube-scheduler-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.527851 kubelet[2274]: I0625 16:26:57.527684 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.527851 kubelet[2274]: I0625 16:26:57.527721 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.527851 kubelet[2274]: I0625 16:26:57.527768 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d69b9d614162bb448252d41244e461f6-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d69b9d614162bb448252d41244e461f6\") " pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.527851 kubelet[2274]: I0625 16:26:57.527820 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d69b9d614162bb448252d41244e461f6-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d69b9d614162bb448252d41244e461f6\") " pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.527851 kubelet[2274]: I0625 16:26:57.527844 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d69b9d614162bb448252d41244e461f6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-0-d0607f9d2c\" (UID: \"d69b9d614162bb448252d41244e461f6\") " pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.528109 kubelet[2274]: I0625 16:26:57.527887 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.528109 kubelet[2274]: I0625 16:26:57.527911 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.528109 kubelet[2274]: I0625 16:26:57.527958 2274 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39e1cbb400f50359088988ce9245826a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-0-d0607f9d2c\" (UID: \"39e1cbb400f50359088988ce9245826a\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" Jun 25 16:26:57.531950 kubelet[2274]: W0625 16:26:57.531889 2274 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:26:57.532135 kubelet[2274]: W0625 16:26:57.532120 2274 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:26:57.829235 kubelet[2274]: E0625 16:26:57.829091 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:57.833708 kubelet[2274]: E0625 16:26:57.833614 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:57.835073 kubelet[2274]: E0625 16:26:57.834877 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:58.277686 kubelet[2274]: I0625 16:26:58.277608 2274 apiserver.go:52] "Watching apiserver" Jun 25 16:26:58.327103 kubelet[2274]: I0625 16:26:58.325080 2274 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 16:26:58.476165 kubelet[2274]: E0625 16:26:58.476102 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:58.478898 kubelet[2274]: E0625 16:26:58.478793 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:58.480055 kubelet[2274]: E0625 16:26:58.480003 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:26:58.640461 kubelet[2274]: I0625 16:26:58.640266 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-0-d0607f9d2c" podStartSLOduration=1.6402369829999999 podStartE2EDuration="1.640236983s" podCreationTimestamp="2024-06-25 16:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:58.626476067 +0000 UTC m=+1.503079301" watchObservedRunningTime="2024-06-25 16:26:58.640236983 +0000 UTC m=+1.516840215" Jun 25 16:26:58.732739 kubelet[2274]: I0625 16:26:58.732635 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-0-d0607f9d2c" podStartSLOduration=1.7326066230000001 podStartE2EDuration="1.732606623s" podCreationTimestamp="2024-06-25 16:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:58.672911603 +0000 UTC m=+1.549514836" watchObservedRunningTime="2024-06-25 16:26:58.732606623 +0000 UTC m=+1.609209856" Jun 25 16:26:59.478647 kubelet[2274]: E0625 16:26:59.478589 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:00.325663 kubelet[2274]: E0625 16:27:00.325614 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:00.480913 kubelet[2274]: E0625 16:27:00.480822 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:00.559017 kubelet[2274]: I0625 16:27:00.558932 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-0-d0607f9d2c" podStartSLOduration=3.558893147 podStartE2EDuration="3.558893147s" podCreationTimestamp="2024-06-25 16:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:26:58.734507659 +0000 UTC m=+1.611110889" watchObservedRunningTime="2024-06-25 16:27:00.558893147 +0000 UTC m=+3.435496371" Jun 25 16:27:01.869320 kubelet[2274]: E0625 16:27:01.869265 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:02.271459 kubelet[2274]: E0625 16:27:02.264633 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:02.508110 kubelet[2274]: E0625 16:27:02.508067 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:02.510605 kubelet[2274]: E0625 16:27:02.510566 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:02.910537 sudo[1438]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:02.909000 audit[1438]: USER_END pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:02.911426 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 16:27:02.911502 kernel: audit: type=1106 audit(1719332822.909:378): pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:02.912000 audit[1438]: CRED_DISP pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:02.929782 kernel: audit: type=1104 audit(1719332822.912:379): pid=1438 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:02.931398 sshd[1435]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:02.938333 systemd[1]: sshd@6-161.35.235.79:22-139.178.89.65:41770.service: Deactivated successfully. Jun 25 16:27:02.939924 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:27:02.940489 systemd[1]: session-7.scope: Consumed 6.640s CPU time. Jun 25 16:27:02.933000 audit[1435]: USER_END pid=1435 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:02.952270 kernel: audit: type=1106 audit(1719332822.933:380): pid=1435 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:02.952438 kernel: audit: type=1104 audit(1719332822.933:381): pid=1435 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:02.933000 audit[1435]: CRED_DISP pid=1435 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:02.951081 systemd-logind[1266]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:27:02.955903 systemd-logind[1266]: Removed session 7. Jun 25 16:27:02.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-161.35.235.79:22-139.178.89.65:41770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:02.965178 kernel: audit: type=1131 audit(1719332822.937:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-161.35.235.79:22-139.178.89.65:41770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:03.512583 kubelet[2274]: E0625 16:27:03.512531 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:05.839111 update_engine[1267]: I0625 16:27:05.838126 1267 update_attempter.cc:509] Updating boot flags... 
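The recurring kubelet dns.go:153 "Nameserver limits exceeded" errors (one appears just above) mean the node's resolv.conf lists more nameserver entries than the three the resolver, and hence the kubelet, will actually use, and the applied line shown in the log even contains 67.207.67.2 twice. A quick local check of what the kubelet is seeing; the path below is the conventional /etc/resolv.conf, which is an assumption to verify on hosts that use a resolver stub:

    # Inspect the node's resolv.conf the way the kubelet complaint above suggests:
    # list the nameserver entries, flag anything past the three that will be used,
    # and flag duplicates.
    RESOLV_CONF = "/etc/resolv.conf"   # conventional path; verify on this host
    MAX_NAMESERVERS = 3                # resolver limit the kubelet applies

    with open(RESOLV_CONF, encoding="utf-8") as fh:
        servers = [parts[1] for parts in (line.split() for line in fh)
                   if len(parts) > 1 and parts[0] == "nameserver"]

    print("nameservers:", servers)
    if len(servers) > MAX_NAMESERVERS:
        print("ignored beyond the limit:", servers[MAX_NAMESERVERS:])
    if len(set(servers)) != len(servers):
        print("duplicate nameserver entries present")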
Jun 25 16:27:05.945783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2356) Jun 25 16:27:06.130299 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2355) Jun 25 16:27:06.223821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2355) Jun 25 16:27:10.266000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.273073 kernel: audit: type=1400 audit(1719332830.266:383): avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.266000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5820 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.284992 kernel: audit: type=1300 audit(1719332830.266:383): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5820 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.285190 kernel: audit: type=1327 audit(1719332830.266:383): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.266000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.268000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.295094 kernel: audit: type=1400 audit(1719332830.268:384): avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.268000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5a20 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.302081 kernel: audit: type=1300 audit(1719332830.268:384): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5a20 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.268000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.308090 kernel: audit: type=1327 audit(1719332830.268:384): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.268000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.313077 kernel: audit: type=1400 audit(1719332830.268:385): avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.268000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5a40 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.320068 kernel: audit: type=1300 audit(1719332830.268:385): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5a40 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.268000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.327090 kernel: audit: type=1327 audit(1719332830.268:385): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.327244 kernel: audit: type=1400 audit(1719332830.268:386): avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.268000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:10.268000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018b5be0 a2=fc6 a3=0 items=0 ppid=1996 
pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.268000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:10.416000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=526945 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:27:10.416000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00100b800 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:10.416000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:12.127410 kubelet[2274]: I0625 16:27:12.127379 2274 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:27:12.128484 containerd[1277]: time="2024-06-25T16:27:12.128427286Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:27:12.128923 kubelet[2274]: I0625 16:27:12.128704 2274 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:27:13.196050 kubelet[2274]: I0625 16:27:13.195957 2274 topology_manager.go:215] "Topology Admit Handler" podUID="b9dce082-bcdb-40d3-ac13-fe5873f69d9c" podNamespace="kube-system" podName="kube-proxy-mk6nx" Jun 25 16:27:13.208548 systemd[1]: Created slice kubepods-besteffort-podb9dce082_bcdb_40d3_ac13_fe5873f69d9c.slice - libcontainer container kubepods-besteffort-podb9dce082_bcdb_40d3_ac13_fe5873f69d9c.slice. 
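The three-record groups above that share an event ID (AVC + SYSCALL + PROCTITLE, e.g. audit(1719332830.266:383)) show kube-controller-manager, running confined as container_t, being refused inotify watches on host-labelled paths: on x86_64 (arch=c000003e) syscall 254 is inotify_add_watch, exit=-13 is -EACCES, and permissive=0 means SELinux enforced the denial rather than only logging it. The watched targets (/etc/kubernetes/pki/ca.crt labelled etc_t and, shortly after, the usr_t volume-plugin directory) are simply not watchable from the container domain under this policy, which is why the denials repeat without anything crashing. The PROCTITLE field is the process argv, NUL-separated and hex-encoded by auditd; the sketch below decodes every such field in a saved copy of this journal (the node.log filename and the regex are illustrative, not part of the log). Kubelet then applies the node's pod CIDR (192.168.0.0/24), containerd notes it is still waiting for a CNI config to be dropped in, and the kube-proxy pod is admitted, which drives the sandbox and iptables activity that follows.

import re

AUDIT_PROCTITLE = re.compile(r"proctitle=([0-9A-Fa-f]{4,})")

with open("node.log") as fh:          # hypothetical plain-text dump of this journal
    text = fh.read()

for hexargv in sorted(set(AUDIT_PROCTITLE.findall(text))):
    if len(hexargv) % 2:              # skip anything mangled by line wrapping
        continue
    # auditd hex-encodes argv because it contains NUL separators, and it
    # truncates the buffer at 128 bytes, hence tails such as "--authori".
    argv = bytes.fromhex(hexargv).split(b"\x00")
    print(" ".join(a.decode(errors="replace") for a in argv))
# The record above, for instance, decodes to:
#   kube-controller-manager --allocate-node-cidrs=true
#   --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authori
# and the same loop recovers the runc and iptables command lines further down.
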
Jun 25 16:27:13.306069 kubelet[2274]: I0625 16:27:13.305982 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9dce082-bcdb-40d3-ac13-fe5873f69d9c-xtables-lock\") pod \"kube-proxy-mk6nx\" (UID: \"b9dce082-bcdb-40d3-ac13-fe5873f69d9c\") " pod="kube-system/kube-proxy-mk6nx" Jun 25 16:27:13.306459 kubelet[2274]: I0625 16:27:13.306420 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrnc\" (UniqueName: \"kubernetes.io/projected/b9dce082-bcdb-40d3-ac13-fe5873f69d9c-kube-api-access-kkrnc\") pod \"kube-proxy-mk6nx\" (UID: \"b9dce082-bcdb-40d3-ac13-fe5873f69d9c\") " pod="kube-system/kube-proxy-mk6nx" Jun 25 16:27:13.306677 kubelet[2274]: I0625 16:27:13.306644 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9dce082-bcdb-40d3-ac13-fe5873f69d9c-kube-proxy\") pod \"kube-proxy-mk6nx\" (UID: \"b9dce082-bcdb-40d3-ac13-fe5873f69d9c\") " pod="kube-system/kube-proxy-mk6nx" Jun 25 16:27:13.306813 kubelet[2274]: I0625 16:27:13.306795 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9dce082-bcdb-40d3-ac13-fe5873f69d9c-lib-modules\") pod \"kube-proxy-mk6nx\" (UID: \"b9dce082-bcdb-40d3-ac13-fe5873f69d9c\") " pod="kube-system/kube-proxy-mk6nx" Jun 25 16:27:13.357540 kubelet[2274]: I0625 16:27:13.357473 2274 topology_manager.go:215] "Topology Admit Handler" podUID="0847fc12-765b-4109-91d7-36ec9c32c0ac" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-n82z4" Jun 25 16:27:13.368822 systemd[1]: Created slice kubepods-besteffort-pod0847fc12_765b_4109_91d7_36ec9c32c0ac.slice - libcontainer container kubepods-besteffort-pod0847fc12_765b_4109_91d7_36ec9c32c0ac.slice. Jun 25 16:27:13.407489 kubelet[2274]: I0625 16:27:13.407413 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5sk\" (UniqueName: \"kubernetes.io/projected/0847fc12-765b-4109-91d7-36ec9c32c0ac-kube-api-access-ml5sk\") pod \"tigera-operator-76ff79f7fd-n82z4\" (UID: \"0847fc12-765b-4109-91d7-36ec9c32c0ac\") " pod="tigera-operator/tigera-operator-76ff79f7fd-n82z4" Jun 25 16:27:13.408148 kubelet[2274]: I0625 16:27:13.408101 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0847fc12-765b-4109-91d7-36ec9c32c0ac-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-n82z4\" (UID: \"0847fc12-765b-4109-91d7-36ec9c32c0ac\") " pod="tigera-operator/tigera-operator-76ff79f7fd-n82z4" Jun 25 16:27:13.520395 kubelet[2274]: E0625 16:27:13.520233 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:13.522741 containerd[1277]: time="2024-06-25T16:27:13.521962769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mk6nx,Uid:b9dce082-bcdb-40d3-ac13-fe5873f69d9c,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:13.586971 containerd[1277]: time="2024-06-25T16:27:13.586814409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:13.587345 containerd[1277]: time="2024-06-25T16:27:13.587137872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:13.587345 containerd[1277]: time="2024-06-25T16:27:13.587207665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:13.587345 containerd[1277]: time="2024-06-25T16:27:13.587224949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:13.628456 systemd[1]: Started cri-containerd-bde8a34505aa20b3360c226eaac33c71a3c3fdbc9d6b48fdd108fa92932124a0.scope - libcontainer container bde8a34505aa20b3360c226eaac33c71a3c3fdbc9d6b48fdd108fa92932124a0. Jun 25 16:27:13.647000 audit: BPF prog-id=96 op=LOAD Jun 25 16:27:13.648000 audit: BPF prog-id=97 op=LOAD Jun 25 16:27:13.648000 audit[2385]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2375 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653861333435303561613230623333363063323236656161633333 Jun 25 16:27:13.648000 audit: BPF prog-id=98 op=LOAD Jun 25 16:27:13.648000 audit[2385]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2375 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653861333435303561613230623333363063323236656161633333 Jun 25 16:27:13.648000 audit: BPF prog-id=98 op=UNLOAD Jun 25 16:27:13.649000 audit: BPF prog-id=97 op=UNLOAD Jun 25 16:27:13.649000 audit: BPF prog-id=99 op=LOAD Jun 25 16:27:13.649000 audit[2385]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2375 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264653861333435303561613230623333363063323236656161633333 Jun 25 16:27:13.675597 containerd[1277]: time="2024-06-25T16:27:13.675542052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-n82z4,Uid:0847fc12-765b-4109-91d7-36ec9c32c0ac,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:27:13.688249 containerd[1277]: time="2024-06-25T16:27:13.688124720Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-mk6nx,Uid:b9dce082-bcdb-40d3-ac13-fe5873f69d9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bde8a34505aa20b3360c226eaac33c71a3c3fdbc9d6b48fdd108fa92932124a0\"" Jun 25 16:27:13.690866 kubelet[2274]: E0625 16:27:13.690828 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:13.703470 containerd[1277]: time="2024-06-25T16:27:13.703402324Z" level=info msg="CreateContainer within sandbox \"bde8a34505aa20b3360c226eaac33c71a3c3fdbc9d6b48fdd108fa92932124a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:27:13.733874 containerd[1277]: time="2024-06-25T16:27:13.732184633Z" level=info msg="CreateContainer within sandbox \"bde8a34505aa20b3360c226eaac33c71a3c3fdbc9d6b48fdd108fa92932124a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd795adb628ac682ff0066c63b7857b9e98d6a75e7b58f316e3df83333d9ee91\"" Jun 25 16:27:13.737262 containerd[1277]: time="2024-06-25T16:27:13.735590207Z" level=info msg="StartContainer for \"bd795adb628ac682ff0066c63b7857b9e98d6a75e7b58f316e3df83333d9ee91\"" Jun 25 16:27:13.763194 containerd[1277]: time="2024-06-25T16:27:13.762949289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:13.763194 containerd[1277]: time="2024-06-25T16:27:13.763061071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:13.763194 containerd[1277]: time="2024-06-25T16:27:13.763093421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:13.763651 containerd[1277]: time="2024-06-25T16:27:13.763117902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:13.788359 systemd[1]: Started cri-containerd-bd795adb628ac682ff0066c63b7857b9e98d6a75e7b58f316e3df83333d9ee91.scope - libcontainer container bd795adb628ac682ff0066c63b7857b9e98d6a75e7b58f316e3df83333d9ee91. Jun 25 16:27:13.821387 systemd[1]: Started cri-containerd-d34f407b5e06a6ae12ee107ef0a4276c096bd4da4f601a4accb798cb90323ae2.scope - libcontainer container d34f407b5e06a6ae12ee107ef0a4276c096bd4da4f601a4accb798cb90323ae2. 
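The containerd messages above trace the CRI call flow for both pods: RunPodSandbox creates the pause sandbox (its 64-character hex ID is reused in the cri-containerd-<id>.scope unit systemd starts), CreateContainer is then issued "within sandbox <id>", and StartContainer launches it. The interleaved "audit: BPF prog-id=... op=LOAD/UNLOAD" records are runc at work: syscall 321 is bpf(2) on x86_64, and on a cgroup-v2 host such loads are most likely the device-controller eBPF filter runc attaches to each container cgroup (an inference from the surrounding runc records, not something the log states). A minimal sketch, assuming this excerpt is saved as a hypothetical node.log, that pairs each pod with its sandbox and container IDs from those messages:

import re

# Illustrative patterns that mirror the containerd wording above; DOTALL lets
# them span the wrapped journal lines in the saved excerpt.
SANDBOX = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([A-Za-z0-9.-]+),.*?'
    r'returns sandbox id \\"([0-9a-f]{64})\\"',
    re.DOTALL,
)
CONTAINER = re.compile(
    r'CreateContainer within sandbox \\"([0-9a-f]{64})\\".*?'
    r'returns container id \\"([0-9a-f]{64})\\"',
    re.DOTALL,
)

text = open("node.log").read()        # hypothetical plain-text dump of this journal
pod_by_sandbox = {sid: pod for pod, sid in SANDBOX.findall(text)}
for sid, cid in CONTAINER.findall(text):
    print(f"{pod_by_sandbox.get(sid, '?')}: sandbox {sid[:12]} -> container {cid[:12]}")
# For the kube-proxy pod above this prints:
#   kube-proxy-mk6nx: sandbox bde8a34505aa -> container bd795adb628a
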
Jun 25 16:27:13.833000 audit: BPF prog-id=100 op=LOAD Jun 25 16:27:13.833000 audit[2432]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2375 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373935616462363238616336383266663030363663363362373835 Jun 25 16:27:13.833000 audit: BPF prog-id=101 op=LOAD Jun 25 16:27:13.833000 audit[2432]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2375 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373935616462363238616336383266663030363663363362373835 Jun 25 16:27:13.833000 audit: BPF prog-id=101 op=UNLOAD Jun 25 16:27:13.833000 audit: BPF prog-id=100 op=UNLOAD Jun 25 16:27:13.833000 audit: BPF prog-id=102 op=LOAD Jun 25 16:27:13.833000 audit[2432]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2375 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264373935616462363238616336383266663030363663363362373835 Jun 25 16:27:13.848000 audit: BPF prog-id=103 op=LOAD Jun 25 16:27:13.849000 audit: BPF prog-id=104 op=LOAD Jun 25 16:27:13.849000 audit[2434]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2421 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346634303762356530366136616531326565313037656630613432 Jun 25 16:27:13.849000 audit: BPF prog-id=105 op=LOAD Jun 25 16:27:13.849000 audit[2434]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2421 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346634303762356530366136616531326565313037656630613432 Jun 25 16:27:13.850000 audit: BPF prog-id=105 op=UNLOAD Jun 25 16:27:13.850000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:27:13.850000 audit: BPF prog-id=106 op=LOAD Jun 25 16:27:13.850000 audit[2434]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2421 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:13.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433346634303762356530366136616531326565313037656630613432 Jun 25 16:27:13.871145 containerd[1277]: time="2024-06-25T16:27:13.871066557Z" level=info msg="StartContainer for \"bd795adb628ac682ff0066c63b7857b9e98d6a75e7b58f316e3df83333d9ee91\" returns successfully" Jun 25 16:27:13.924582 containerd[1277]: time="2024-06-25T16:27:13.924511035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-n82z4,Uid:0847fc12-765b-4109-91d7-36ec9c32c0ac,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d34f407b5e06a6ae12ee107ef0a4276c096bd4da4f601a4accb798cb90323ae2\"" Jun 25 16:27:13.934392 containerd[1277]: time="2024-06-25T16:27:13.934087526Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:27:14.111000 audit[2506]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.111000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd617078d0 a2=0 a3=7ffd617078bc items=0 ppid=2449 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:27:14.113000 audit[2508]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.113000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf58a9c40 a2=0 a3=7ffcf58a9c2c items=0 ppid=2449 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:27:14.115000 audit[2509]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.115000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2b7b0df0 a2=0 a3=7ffc2b7b0ddc items=0 ppid=2449 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:27:14.116000 audit[2507]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.116000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb3da8060 a2=0 a3=7fffb3da804c items=0 ppid=2449 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:27:14.120000 audit[2510]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.120000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff036a920 a2=0 a3=7ffff036a90c items=0 ppid=2449 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:27:14.126000 audit[2511]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.126000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe99308ad0 a2=0 a3=7ffe99308abc items=0 ppid=2449 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:27:14.220000 audit[2512]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.220000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca87bf010 a2=0 a3=7ffca87beffc items=0 ppid=2449 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:27:14.228000 audit[2514]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.228000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7899fa70 a2=0 a3=7fff7899fa5c items=0 ppid=2449 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.228000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:27:14.238000 audit[2517]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.238000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd78bf1ac0 a2=0 a3=7ffd78bf1aac items=0 ppid=2449 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:27:14.242000 audit[2518]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.242000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe50bd9d90 a2=0 a3=7ffe50bd9d7c items=0 ppid=2449 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:27:14.248000 audit[2520]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.248000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff38ba1f30 a2=0 a3=7fff38ba1f1c items=0 ppid=2449 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:27:14.251000 audit[2521]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.251000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8e73d370 a2=0 a3=7ffd8e73d35c items=0 ppid=2449 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:27:14.257000 audit[2523]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.257000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf834bf30 a2=0 a3=7ffdf834bf1c items=0 
ppid=2449 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:27:14.267000 audit[2526]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.267000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffef6d4c210 a2=0 a3=7ffef6d4c1fc items=0 ppid=2449 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:27:14.270000 audit[2527]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.270000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc68eb02f0 a2=0 a3=7ffc68eb02dc items=0 ppid=2449 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:27:14.281000 audit[2529]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.281000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3fda1b10 a2=0 a3=7fff3fda1afc items=0 ppid=2449 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:27:14.288000 audit[2530]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.288000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7ab32300 a2=0 a3=7ffd7ab322ec items=0 ppid=2449 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:27:14.295000 audit[2532]: NETFILTER_CFG table=filter:55 
family=2 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.295000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc1acbcb80 a2=0 a3=7ffc1acbcb6c items=0 ppid=2449 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:27:14.306000 audit[2535]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.306000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb3de7cd0 a2=0 a3=7ffdb3de7cbc items=0 ppid=2449 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:27:14.320000 audit[2538]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.320000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7df3d3e0 a2=0 a3=7ffd7df3d3cc items=0 ppid=2449 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:27:14.324000 audit[2539]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.324000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde45588f0 a2=0 a3=7ffde45588dc items=0 ppid=2449 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:27:14.333000 audit[2541]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.333000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcc192f380 a2=0 a3=7ffcc192f36c items=0 ppid=2449 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:14.344000 audit[2544]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.344000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea31842d0 a2=0 a3=7ffea31842bc items=0 ppid=2449 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:14.348000 audit[2545]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.348000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec2d1bdf0 a2=0 a3=7ffec2d1bddc items=0 ppid=2449 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:27:14.354000 audit[2547]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:14.354000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcbd1ad1d0 a2=0 a3=7ffcbd1ad1bc items=0 ppid=2449 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:27:14.402000 audit[2553]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:14.402000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd7644fe20 a2=0 a3=7ffd7644fe0c items=0 ppid=2449 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:14.411000 audit[2553]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:14.411000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd7644fe20 a2=0 a3=7ffd7644fe0c 
items=0 ppid=2449 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:14.415000 audit[2559]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.415000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff24f1c710 a2=0 a3=7fff24f1c6fc items=0 ppid=2449 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:27:14.422000 audit[2561]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.422000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdb1661cb0 a2=0 a3=7ffdb1661c9c items=0 ppid=2449 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:27:14.434000 audit[2564]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.434000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff034cea20 a2=0 a3=7fff034cea0c items=0 ppid=2449 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:27:14.450000 audit[2565]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.450000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7ee49b20 a2=0 a3=7ffc7ee49b0c items=0 ppid=2449 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:27:14.460000 audit[2567]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2567 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.460000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2090e7d0 a2=0 a3=7ffc2090e7bc items=0 ppid=2449 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:27:14.463000 audit[2568]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.463000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff502408e0 a2=0 a3=7fff502408cc items=0 ppid=2449 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:27:14.468000 audit[2570]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.468000 audit[2570]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1e9a01b0 a2=0 a3=7fff1e9a019c items=0 ppid=2449 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:27:14.478000 audit[2573]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.478000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffc9a96600 a2=0 a3=7fffc9a965ec items=0 ppid=2449 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:27:14.481000 audit[2574]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.481000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff66ea1610 a2=0 a3=7fff66ea15fc items=0 ppid=2449 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
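The long run of NETFILTER_CFG / SYSCALL / PROCTITLE triplets that starts above and continues below is kube-proxy laying down its baseline chains and jump rules through iptables-nft: exe is /usr/sbin/xtables-nft-multi, syscall 46 is sendmsg(2) on x86_64 carrying the nf_tables netlink transaction for each invocation, family=2 is IPv4 and family=10 is IPv6. Decoded with the same helper as earlier, the hex PROCTITLEs resolve to commands such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle (the canary chain kube-proxy uses to notice when its rules have been flushed externally), followed by the KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains and the batched iptables-restore/ip6tables-restore runs. The sketch below (same hypothetical node.log; the regex mirrors the field layout of these records) tallies the registrations per family and table, which makes the symmetric IPv4 and IPv6 passes easy to compare:

import re
from collections import Counter

# Field layout exactly as it appears in the NETFILTER_CFG records here.
NFT = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")
FAMILY = {"2": "ipv4", "10": "ipv6"}            # AF_INET / AF_INET6

calls = Counter()
entries = Counter()
for table, family, count, op in NFT.findall(open("node.log").read()):
    fam = FAMILY.get(family, family)
    calls[(fam, table, op)] += 1                # one record per table changed
    entries[(fam, table)] += int(count)         # chains/rules registered in that change

for (fam, table, op), n in sorted(calls.items()):
    print(f"{fam:4} {table:7} {op:<22} x{n}")
for (fam, table), total in sorted(entries.items()):
    print(f"{fam:4} {table:7} entries touched: {total}")
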
Jun 25 16:27:14.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:27:14.487000 audit[2576]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.487000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd77614d20 a2=0 a3=7ffd77614d0c items=0 ppid=2449 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:27:14.491000 audit[2577]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.491000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec139a410 a2=0 a3=7ffec139a3fc items=0 ppid=2449 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.491000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:27:14.499000 audit[2579]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.499000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec3b0a1f0 a2=0 a3=7ffec3b0a1dc items=0 ppid=2449 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:27:14.510000 audit[2582]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.510000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdf3fee30 a2=0 a3=7fffdf3fee1c items=0 ppid=2449 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:27:14.520000 audit[2585]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.520000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffd00f57370 a2=0 a3=7ffd00f5735c items=0 ppid=2449 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:27:14.523000 audit[2586]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.523000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe491b6990 a2=0 a3=7ffe491b697c items=0 ppid=2449 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.523000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:27:14.528000 audit[2588]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.528000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc56e168b0 a2=0 a3=7ffc56e1689c items=0 ppid=2449 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:14.537000 audit[2591]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.537000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcd92d9310 a2=0 a3=7ffcd92d92fc items=0 ppid=2449 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:27:14.539000 audit[2592]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.539000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde47b6c60 a2=0 a3=7ffde47b6c4c items=0 ppid=2449 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:27:14.545000 audit[2594]: NETFILTER_CFG table=nat:83 family=10 
entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.545000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc7aa1c800 a2=0 a3=7ffc7aa1c7ec items=0 ppid=2449 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:27:14.548000 audit[2595]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.548000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc779157b0 a2=0 a3=7ffc7791579c items=0 ppid=2449 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:27:14.554064 kubelet[2274]: E0625 16:27:14.553995 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:14.564000 audit[2597]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.564000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda4425140 a2=0 a3=7ffda442512c items=0 ppid=2449 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:27:14.572454 kubelet[2274]: I0625 16:27:14.572387 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mk6nx" podStartSLOduration=1.572360601 podStartE2EDuration="1.572360601s" podCreationTimestamp="2024-06-25 16:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:14.571071672 +0000 UTC m=+17.447674905" watchObservedRunningTime="2024-06-25 16:27:14.572360601 +0000 UTC m=+17.448963831" Jun 25 16:27:14.587000 audit[2600]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:27:14.587000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc900cf50 a2=0 a3=7ffdc900cf3c items=0 ppid=2449 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.587000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:27:14.594000 audit[2602]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:27:14.594000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffdbfa74060 a2=0 a3=7ffdbfa7404c items=0 ppid=2449 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.594000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:14.595000 audit[2602]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:27:14.595000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdbfa74060 a2=0 a3=7ffdbfa7404c items=0 ppid=2449 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:14.595000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:15.249918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428371118.mount: Deactivated successfully. Jun 25 16:27:17.579075 containerd[1277]: time="2024-06-25T16:27:17.578966926Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:17.585570 containerd[1277]: time="2024-06-25T16:27:17.585466067Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076104" Jun 25 16:27:17.587403 containerd[1277]: time="2024-06-25T16:27:17.587329309Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:17.592261 containerd[1277]: time="2024-06-25T16:27:17.592168487Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:17.595997 containerd[1277]: time="2024-06-25T16:27:17.595910730Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:17.599484 containerd[1277]: time="2024-06-25T16:27:17.599363249Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.665123383s" Jun 25 16:27:17.599484 containerd[1277]: time="2024-06-25T16:27:17.599453993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:27:17.606395 containerd[1277]: time="2024-06-25T16:27:17.606343028Z" level=info 
msg="CreateContainer within sandbox \"d34f407b5e06a6ae12ee107ef0a4276c096bd4da4f601a4accb798cb90323ae2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:27:17.632851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509241865.mount: Deactivated successfully. Jun 25 16:27:17.647108 containerd[1277]: time="2024-06-25T16:27:17.647054721Z" level=info msg="CreateContainer within sandbox \"d34f407b5e06a6ae12ee107ef0a4276c096bd4da4f601a4accb798cb90323ae2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c447972f1c7d512b6f86b5f4b44174d51c137ae01af14ce8a0b5cd91a8a0c2a9\"" Jun 25 16:27:17.648861 containerd[1277]: time="2024-06-25T16:27:17.648179682Z" level=info msg="StartContainer for \"c447972f1c7d512b6f86b5f4b44174d51c137ae01af14ce8a0b5cd91a8a0c2a9\"" Jun 25 16:27:17.693455 systemd[1]: Started cri-containerd-c447972f1c7d512b6f86b5f4b44174d51c137ae01af14ce8a0b5cd91a8a0c2a9.scope - libcontainer container c447972f1c7d512b6f86b5f4b44174d51c137ae01af14ce8a0b5cd91a8a0c2a9. Jun 25 16:27:17.716000 audit: BPF prog-id=107 op=LOAD Jun 25 16:27:17.718655 kernel: kauditd_printk_skb: 193 callbacks suppressed Jun 25 16:27:17.718766 kernel: audit: type=1334 audit(1719332837.716:456): prog-id=107 op=LOAD Jun 25 16:27:17.720000 audit: BPF prog-id=108 op=LOAD Jun 25 16:27:17.724262 kernel: audit: type=1334 audit(1719332837.720:457): prog-id=108 op=LOAD Jun 25 16:27:17.720000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2421 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:17.731072 kernel: audit: type=1300 audit(1719332837.720:457): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2421 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:17.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343739373266316337643531326236663836623566346234343137 Jun 25 16:27:17.738097 kernel: audit: type=1327 audit(1719332837.720:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343739373266316337643531326236663836623566346234343137 Jun 25 16:27:17.723000 audit: BPF prog-id=109 op=LOAD Jun 25 16:27:17.741102 kernel: audit: type=1334 audit(1719332837.723:458): prog-id=109 op=LOAD Jun 25 16:27:17.723000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2421 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:17.755123 kernel: audit: type=1300 audit(1719332837.723:458): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2421 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:17.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343739373266316337643531326236663836623566346234343137 Jun 25 16:27:17.764123 kernel: audit: type=1327 audit(1719332837.723:458): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343739373266316337643531326236663836623566346234343137 Jun 25 16:27:17.723000 audit: BPF prog-id=109 op=UNLOAD Jun 25 16:27:17.767078 kernel: audit: type=1334 audit(1719332837.723:459): prog-id=109 op=UNLOAD Jun 25 16:27:17.723000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:27:17.770157 kernel: audit: type=1334 audit(1719332837.723:460): prog-id=108 op=UNLOAD Jun 25 16:27:17.723000 audit: BPF prog-id=110 op=LOAD Jun 25 16:27:17.772136 kernel: audit: type=1334 audit(1719332837.723:461): prog-id=110 op=LOAD Jun 25 16:27:17.723000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2421 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:17.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343739373266316337643531326236663836623566346234343137 Jun 25 16:27:17.783530 containerd[1277]: time="2024-06-25T16:27:17.783470525Z" level=info msg="StartContainer for \"c447972f1c7d512b6f86b5f4b44174d51c137ae01af14ce8a0b5cd91a8a0c2a9\" returns successfully" Jun 25 16:27:18.591145 kubelet[2274]: I0625 16:27:18.591067 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-n82z4" podStartSLOduration=1.920614653 podStartE2EDuration="5.591017549s" podCreationTimestamp="2024-06-25 16:27:13 +0000 UTC" firstStartedPulling="2024-06-25 16:27:13.930252416 +0000 UTC m=+16.806855647" lastFinishedPulling="2024-06-25 16:27:17.600655333 +0000 UTC m=+20.477258543" observedRunningTime="2024-06-25 16:27:18.590371824 +0000 UTC m=+21.466975056" watchObservedRunningTime="2024-06-25 16:27:18.591017549 +0000 UTC m=+21.467620839" Jun 25 16:27:21.131000 audit[2650]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:21.131000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd33ba270 a2=0 a3=7ffcd33ba25c items=0 ppid=2449 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:21.133000 audit[2650]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:21.133000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffcd33ba270 a2=0 a3=0 items=0 ppid=2449 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:21.150000 audit[2652]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:21.150000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff15376600 a2=0 a3=7fff153765ec items=0 ppid=2449 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:21.151000 audit[2652]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:21.151000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff15376600 a2=0 a3=0 items=0 ppid=2449 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:21.305677 kubelet[2274]: I0625 16:27:21.305612 2274 topology_manager.go:215] "Topology Admit Handler" podUID="d0e59699-032d-4d36-9cb8-38f197351243" podNamespace="calico-system" podName="calico-typha-55c97dbb7-lpsnt" Jun 25 16:27:21.317451 systemd[1]: Created slice kubepods-besteffort-podd0e59699_032d_4d36_9cb8_38f197351243.slice - libcontainer container kubepods-besteffort-podd0e59699_032d_4d36_9cb8_38f197351243.slice. 
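The proctitle= fields in the audit records above are the invoking process's argv, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch (Python; the sample value is copied from the iptables-restore record above, and the decoded command is what kube-proxy actually ran):

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    import binascii

    def decode_proctitle(hexstr: str) -> str:
        raw = binascii.unhexlify(hexstr)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    # Value copied from the iptables-restore audit record above.
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters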
Jun 25 16:27:21.379884 kubelet[2274]: I0625 16:27:21.379789 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b92g\" (UniqueName: \"kubernetes.io/projected/d0e59699-032d-4d36-9cb8-38f197351243-kube-api-access-6b92g\") pod \"calico-typha-55c97dbb7-lpsnt\" (UID: \"d0e59699-032d-4d36-9cb8-38f197351243\") " pod="calico-system/calico-typha-55c97dbb7-lpsnt" Jun 25 16:27:21.380132 kubelet[2274]: I0625 16:27:21.379939 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e59699-032d-4d36-9cb8-38f197351243-tigera-ca-bundle\") pod \"calico-typha-55c97dbb7-lpsnt\" (UID: \"d0e59699-032d-4d36-9cb8-38f197351243\") " pod="calico-system/calico-typha-55c97dbb7-lpsnt" Jun 25 16:27:21.380132 kubelet[2274]: I0625 16:27:21.379979 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d0e59699-032d-4d36-9cb8-38f197351243-typha-certs\") pod \"calico-typha-55c97dbb7-lpsnt\" (UID: \"d0e59699-032d-4d36-9cb8-38f197351243\") " pod="calico-system/calico-typha-55c97dbb7-lpsnt" Jun 25 16:27:21.435993 kubelet[2274]: I0625 16:27:21.435936 2274 topology_manager.go:215] "Topology Admit Handler" podUID="63c256e4-0a3d-4f99-bc59-677b9764fce7" podNamespace="calico-system" podName="calico-node-rr8wv" Jun 25 16:27:21.444538 systemd[1]: Created slice kubepods-besteffort-pod63c256e4_0a3d_4f99_bc59_677b9764fce7.slice - libcontainer container kubepods-besteffort-pod63c256e4_0a3d_4f99_bc59_677b9764fce7.slice. Jun 25 16:27:21.558516 kubelet[2274]: I0625 16:27:21.558468 2274 topology_manager.go:215] "Topology Admit Handler" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" podNamespace="calico-system" podName="csi-node-driver-qrghb" Jun 25 16:27:21.558791 kubelet[2274]: E0625 16:27:21.558759 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:21.582640 kubelet[2274]: I0625 16:27:21.582522 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-cni-bin-dir\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582640 kubelet[2274]: I0625 16:27:21.582591 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-var-lib-calico\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582640 kubelet[2274]: I0625 16:27:21.582650 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-cni-net-dir\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582969 kubelet[2274]: I0625 16:27:21.582680 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jtsv5\" (UniqueName: \"kubernetes.io/projected/63c256e4-0a3d-4f99-bc59-677b9764fce7-kube-api-access-jtsv5\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582969 kubelet[2274]: I0625 16:27:21.582704 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-var-run-calico\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582969 kubelet[2274]: I0625 16:27:21.582750 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/63c256e4-0a3d-4f99-bc59-677b9764fce7-node-certs\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582969 kubelet[2274]: I0625 16:27:21.582778 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-cni-log-dir\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.582969 kubelet[2274]: I0625 16:27:21.582804 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-xtables-lock\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.583155 kubelet[2274]: I0625 16:27:21.582828 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-flexvol-driver-host\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.583155 kubelet[2274]: I0625 16:27:21.582850 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-lib-modules\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.583155 kubelet[2274]: I0625 16:27:21.582876 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/63c256e4-0a3d-4f99-bc59-677b9764fce7-policysync\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.583155 kubelet[2274]: I0625 16:27:21.582922 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c256e4-0a3d-4f99-bc59-677b9764fce7-tigera-ca-bundle\") pod \"calico-node-rr8wv\" (UID: \"63c256e4-0a3d-4f99-bc59-677b9764fce7\") " pod="calico-system/calico-node-rr8wv" Jun 25 16:27:21.623320 kubelet[2274]: E0625 16:27:21.623268 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:21.624766 containerd[1277]: time="2024-06-25T16:27:21.624706194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c97dbb7-lpsnt,Uid:d0e59699-032d-4d36-9cb8-38f197351243,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:21.675053 containerd[1277]: time="2024-06-25T16:27:21.674914152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:21.677415 containerd[1277]: time="2024-06-25T16:27:21.677218299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:21.677595 containerd[1277]: time="2024-06-25T16:27:21.677436375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:21.677595 containerd[1277]: time="2024-06-25T16:27:21.677476828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:21.683805 kubelet[2274]: I0625 16:27:21.683413 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/66a2358e-62b7-4455-bce2-ea313197d5cb-varrun\") pod \"csi-node-driver-qrghb\" (UID: \"66a2358e-62b7-4455-bce2-ea313197d5cb\") " pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:21.683805 kubelet[2274]: I0625 16:27:21.683509 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66a2358e-62b7-4455-bce2-ea313197d5cb-socket-dir\") pod \"csi-node-driver-qrghb\" (UID: \"66a2358e-62b7-4455-bce2-ea313197d5cb\") " pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:21.683805 kubelet[2274]: I0625 16:27:21.683541 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66a2358e-62b7-4455-bce2-ea313197d5cb-registration-dir\") pod \"csi-node-driver-qrghb\" (UID: \"66a2358e-62b7-4455-bce2-ea313197d5cb\") " pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:21.683805 kubelet[2274]: I0625 16:27:21.683599 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hcc6\" (UniqueName: \"kubernetes.io/projected/66a2358e-62b7-4455-bce2-ea313197d5cb-kube-api-access-4hcc6\") pod \"csi-node-driver-qrghb\" (UID: \"66a2358e-62b7-4455-bce2-ea313197d5cb\") " pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:21.683805 kubelet[2274]: I0625 16:27:21.683618 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66a2358e-62b7-4455-bce2-ea313197d5cb-kubelet-dir\") pod \"csi-node-driver-qrghb\" (UID: \"66a2358e-62b7-4455-bce2-ea313197d5cb\") " pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:21.726896 kubelet[2274]: E0625 16:27:21.720994 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.740151 kubelet[2274]: W0625 16:27:21.740102 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.740400 
kubelet[2274]: E0625 16:27:21.740385 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.742744 kubelet[2274]: E0625 16:27:21.742702 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.742988 kubelet[2274]: W0625 16:27:21.742960 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.743171 kubelet[2274]: E0625 16:27:21.743149 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.743519 kubelet[2274]: E0625 16:27:21.743496 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.743519 kubelet[2274]: W0625 16:27:21.743513 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.743817 kubelet[2274]: E0625 16:27:21.743529 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.743817 kubelet[2274]: E0625 16:27:21.743723 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.743817 kubelet[2274]: W0625 16:27:21.743730 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.743817 kubelet[2274]: E0625 16:27:21.743739 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.744059 kubelet[2274]: E0625 16:27:21.743965 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.744059 kubelet[2274]: W0625 16:27:21.743973 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.744059 kubelet[2274]: E0625 16:27:21.743982 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.770961 systemd[1]: Started cri-containerd-84aac28ebeb7124a10526b0e85e0aff11d1da85ac9b772f069ffb087c72f7a55.scope - libcontainer container 84aac28ebeb7124a10526b0e85e0aff11d1da85ac9b772f069ffb087c72f7a55. 
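The repeated driver-call failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers before Calico's nodeagent~uds binary has been installed on the host (the pod2daemon-flexvol image pulled later in this log ships it), so the init call finds no executable and its empty output fails JSON parsing. A rough, illustrative sketch of the contract the kubelet is checking for, assuming the standard FlexVolume calling convention:

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver stub: the kubelet invokes "<driver> init" and
    # parses stdout as JSON, which is why a missing binary surfaces as
    # "unexpected end of JSON input" above. Not the real nodeagent~uds driver.
    import json
    import sys

    if len(sys.argv) > 1 and sys.argv[1] == "init":
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))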
Jun 25 16:27:21.774898 kubelet[2274]: E0625 16:27:21.774380 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.774898 kubelet[2274]: W0625 16:27:21.774404 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.774898 kubelet[2274]: E0625 16:27:21.774427 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.784325 kubelet[2274]: E0625 16:27:21.784289 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.784325 kubelet[2274]: W0625 16:27:21.784315 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.784526 kubelet[2274]: E0625 16:27:21.784345 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.784687 kubelet[2274]: E0625 16:27:21.784671 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.784687 kubelet[2274]: W0625 16:27:21.784685 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.784759 kubelet[2274]: E0625 16:27:21.784703 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.785041 kubelet[2274]: E0625 16:27:21.785012 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.785041 kubelet[2274]: W0625 16:27:21.785028 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.785151 kubelet[2274]: E0625 16:27:21.785056 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.785318 kubelet[2274]: E0625 16:27:21.785300 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.785363 kubelet[2274]: W0625 16:27:21.785321 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.785363 kubelet[2274]: E0625 16:27:21.785338 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:21.785706 kubelet[2274]: E0625 16:27:21.785687 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.785755 kubelet[2274]: W0625 16:27:21.785704 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.785755 kubelet[2274]: E0625 16:27:21.785730 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.785966 kubelet[2274]: E0625 16:27:21.785951 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.785966 kubelet[2274]: W0625 16:27:21.785963 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.786089 kubelet[2274]: E0625 16:27:21.786068 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.786153 kubelet[2274]: E0625 16:27:21.786141 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.786188 kubelet[2274]: W0625 16:27:21.786152 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.786247 kubelet[2274]: E0625 16:27:21.786236 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.786325 kubelet[2274]: E0625 16:27:21.786310 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.786325 kubelet[2274]: W0625 16:27:21.786319 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.786398 kubelet[2274]: E0625 16:27:21.786333 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.786555 kubelet[2274]: E0625 16:27:21.786530 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.786599 kubelet[2274]: W0625 16:27:21.786554 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.786660 kubelet[2274]: E0625 16:27:21.786645 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:21.786854 kubelet[2274]: E0625 16:27:21.786836 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.786895 kubelet[2274]: W0625 16:27:21.786854 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.786977 kubelet[2274]: E0625 16:27:21.786959 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.787135 kubelet[2274]: E0625 16:27:21.787123 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.787135 kubelet[2274]: W0625 16:27:21.787134 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.787247 kubelet[2274]: E0625 16:27:21.787227 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.787340 kubelet[2274]: E0625 16:27:21.787327 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.787340 kubelet[2274]: W0625 16:27:21.787339 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.787408 kubelet[2274]: E0625 16:27:21.787353 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.787602 kubelet[2274]: E0625 16:27:21.787585 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.787602 kubelet[2274]: W0625 16:27:21.787600 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.787701 kubelet[2274]: E0625 16:27:21.787687 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.787801 kubelet[2274]: E0625 16:27:21.787785 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.787801 kubelet[2274]: W0625 16:27:21.787796 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.787978 kubelet[2274]: E0625 16:27:21.787955 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:21.788111 kubelet[2274]: E0625 16:27:21.787966 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.788228 kubelet[2274]: W0625 16:27:21.788204 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.788653 kubelet[2274]: E0625 16:27:21.788630 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.788791 kubelet[2274]: W0625 16:27:21.788770 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.791257 kubelet[2274]: E0625 16:27:21.791211 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.791404 kubelet[2274]: E0625 16:27:21.791300 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.791666 kubelet[2274]: E0625 16:27:21.791641 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.791805 kubelet[2274]: W0625 16:27:21.791779 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.791964 kubelet[2274]: E0625 16:27:21.791938 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.792318 kubelet[2274]: E0625 16:27:21.792290 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.792464 kubelet[2274]: W0625 16:27:21.792441 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.792713 kubelet[2274]: E0625 16:27:21.792677 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.792962 kubelet[2274]: E0625 16:27:21.792945 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.793097 kubelet[2274]: W0625 16:27:21.793082 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.793264 kubelet[2274]: E0625 16:27:21.793218 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:21.793679 kubelet[2274]: E0625 16:27:21.793658 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.793978 kubelet[2274]: W0625 16:27:21.793950 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.794183 kubelet[2274]: E0625 16:27:21.794157 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.794485 kubelet[2274]: E0625 16:27:21.794465 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.794571 kubelet[2274]: W0625 16:27:21.794559 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.796074 kubelet[2274]: E0625 16:27:21.794685 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.796251 kubelet[2274]: E0625 16:27:21.796236 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.796316 kubelet[2274]: W0625 16:27:21.796305 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.796591 kubelet[2274]: E0625 16:27:21.796581 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.796671 kubelet[2274]: W0625 16:27:21.796661 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.796728 kubelet[2274]: E0625 16:27:21.796717 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.796800 kubelet[2274]: E0625 16:27:21.796791 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.798064 kubelet[2274]: E0625 16:27:21.797133 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.798064 kubelet[2274]: W0625 16:27:21.797151 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.798064 kubelet[2274]: E0625 16:27:21.797172 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:21.798064 kubelet[2274]: E0625 16:27:21.797458 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.798064 kubelet[2274]: W0625 16:27:21.797470 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.798064 kubelet[2274]: E0625 16:27:21.797483 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.826638 kubelet[2274]: E0625 16:27:21.826598 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:21.826638 kubelet[2274]: W0625 16:27:21.826624 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:21.826638 kubelet[2274]: E0625 16:27:21.826646 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:21.873000 audit: BPF prog-id=111 op=LOAD Jun 25 16:27:21.873000 audit: BPF prog-id=112 op=LOAD Jun 25 16:27:21.873000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2662 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834616163323865626562373132346131303532366230653835653061 Jun 25 16:27:21.874000 audit: BPF prog-id=113 op=LOAD Jun 25 16:27:21.874000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2662 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834616163323865626562373132346131303532366230653835653061 Jun 25 16:27:21.874000 audit: BPF prog-id=113 op=UNLOAD Jun 25 16:27:21.874000 audit: BPF prog-id=112 op=UNLOAD Jun 25 16:27:21.875000 audit: BPF prog-id=114 op=LOAD Jun 25 16:27:21.875000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2662 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:21.875000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834616163323865626562373132346131303532366230653835653061 Jun 25 16:27:21.919589 containerd[1277]: time="2024-06-25T16:27:21.919534728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c97dbb7-lpsnt,Uid:d0e59699-032d-4d36-9cb8-38f197351243,Namespace:calico-system,Attempt:0,} returns sandbox id \"84aac28ebeb7124a10526b0e85e0aff11d1da85ac9b772f069ffb087c72f7a55\"" Jun 25 16:27:21.920382 kubelet[2274]: E0625 16:27:21.920333 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:21.924216 containerd[1277]: time="2024-06-25T16:27:21.924175537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:27:22.048530 kubelet[2274]: E0625 16:27:22.048237 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:22.057771 containerd[1277]: time="2024-06-25T16:27:22.057721308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rr8wv,Uid:63c256e4-0a3d-4f99-bc59-677b9764fce7,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:22.091690 containerd[1277]: time="2024-06-25T16:27:22.090850856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:22.091690 containerd[1277]: time="2024-06-25T16:27:22.090928537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:22.091690 containerd[1277]: time="2024-06-25T16:27:22.090947804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:22.091690 containerd[1277]: time="2024-06-25T16:27:22.090978102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:22.119301 systemd[1]: Started cri-containerd-4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776.scope - libcontainer container 4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776. 
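The bursts of "audit: BPF prog-id=... op=LOAD/UNLOAD" records that bracket each cri-containerd scope start above are most likely runc attaching the cgroup device-filter program for the new container; the accompanying SYSCALL records carry only raw numbers. On x86_64 the two numbers seen in this section translate as below (a small illustrative mapping, not a full syscall table):

    # Map the x86_64 syscall numbers appearing in the audit records above.
    X86_64_SYSCALLS = {
        46: "sendmsg",  # netlink traffic behind the NETFILTER_CFG events
        321: "bpf",     # program loads logged as "audit: BPF prog-id=... op=LOAD"
    }

    def syscall_name(nr: int) -> str:
        return X86_64_SYSCALLS.get(nr, f"unknown({nr})")

    print(syscall_name(321))  # -> bpf
    print(syscall_name(46))   # -> sendmsg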
Jun 25 16:27:22.145000 audit: BPF prog-id=115 op=LOAD Jun 25 16:27:22.146000 audit: BPF prog-id=116 op=LOAD Jun 25 16:27:22.146000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2738 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:22.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653164353033623535376339333737663364373562313337623731 Jun 25 16:27:22.146000 audit: BPF prog-id=117 op=LOAD Jun 25 16:27:22.146000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2738 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:22.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653164353033623535376339333737663364373562313337623731 Jun 25 16:27:22.146000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:27:22.146000 audit: BPF prog-id=116 op=UNLOAD Jun 25 16:27:22.146000 audit: BPF prog-id=118 op=LOAD Jun 25 16:27:22.146000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2738 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:22.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462653164353033623535376339333737663364373562313337623731 Jun 25 16:27:22.168953 containerd[1277]: time="2024-06-25T16:27:22.168893885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rr8wv,Uid:63c256e4-0a3d-4f99-bc59-677b9764fce7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\"" Jun 25 16:27:22.170364 kubelet[2274]: E0625 16:27:22.170320 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:22.175000 audit[2771]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:22.175000 audit[2771]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcfa0d9a90 a2=0 a3=7ffcfa0d9a7c items=0 ppid=2449 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:22.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:22.176000 audit[2771]: NETFILTER_CFG table=nat:94 family=2 
entries=12 op=nft_register_rule pid=2771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:22.176000 audit[2771]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfa0d9a90 a2=0 a3=0 items=0 ppid=2449 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:22.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:23.422120 kubelet[2274]: E0625 16:27:23.422009 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:24.872529 containerd[1277]: time="2024-06-25T16:27:24.872461793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.879857 containerd[1277]: time="2024-06-25T16:27:24.879774034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:27:24.882797 containerd[1277]: time="2024-06-25T16:27:24.882726133Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.887611 containerd[1277]: time="2024-06-25T16:27:24.887552806Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.891747 containerd[1277]: time="2024-06-25T16:27:24.891673414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:24.894468 containerd[1277]: time="2024-06-25T16:27:24.894378481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.970154714s" Jun 25 16:27:24.894468 containerd[1277]: time="2024-06-25T16:27:24.894461183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:27:24.899965 containerd[1277]: time="2024-06-25T16:27:24.898967980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:27:24.945747 containerd[1277]: time="2024-06-25T16:27:24.945647779Z" level=info msg="CreateContainer within sandbox \"84aac28ebeb7124a10526b0e85e0aff11d1da85ac9b772f069ffb087c72f7a55\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:27:24.993334 containerd[1277]: time="2024-06-25T16:27:24.993256730Z" level=info msg="CreateContainer within sandbox \"84aac28ebeb7124a10526b0e85e0aff11d1da85ac9b772f069ffb087c72f7a55\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"0febe712e77b1ec01baeae12047f525289fe9306d855242a6fe7ff08ea8fd004\"" Jun 25 16:27:24.994748 containerd[1277]: time="2024-06-25T16:27:24.994686977Z" level=info msg="StartContainer for \"0febe712e77b1ec01baeae12047f525289fe9306d855242a6fe7ff08ea8fd004\"" Jun 25 16:27:25.064381 systemd[1]: Started cri-containerd-0febe712e77b1ec01baeae12047f525289fe9306d855242a6fe7ff08ea8fd004.scope - libcontainer container 0febe712e77b1ec01baeae12047f525289fe9306d855242a6fe7ff08ea8fd004. Jun 25 16:27:25.095000 audit: BPF prog-id=119 op=LOAD Jun 25 16:27:25.098046 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:27:25.098209 kernel: audit: type=1334 audit(1719332845.095:480): prog-id=119 op=LOAD Jun 25 16:27:25.102000 audit: BPF prog-id=120 op=LOAD Jun 25 16:27:25.167082 kernel: audit: type=1334 audit(1719332845.102:481): prog-id=120 op=LOAD Jun 25 16:27:25.102000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2662 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:25.174072 kernel: audit: type=1300 audit(1719332845.102:481): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2662 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:25.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656265373132653737623165633031626165616531323034376635 Jun 25 16:27:25.185088 kernel: audit: type=1327 audit(1719332845.102:481): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656265373132653737623165633031626165616531323034376635 Jun 25 16:27:25.102000 audit: BPF prog-id=121 op=LOAD Jun 25 16:27:25.188074 kernel: audit: type=1334 audit(1719332845.102:482): prog-id=121 op=LOAD Jun 25 16:27:25.102000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2662 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:25.195086 kernel: audit: type=1300 audit(1719332845.102:482): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2662 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:25.195275 kernel: audit: type=1327 audit(1719332845.102:482): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656265373132653737623165633031626165616531323034376635 Jun 25 16:27:25.102000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656265373132653737623165633031626165616531323034376635 Jun 25 16:27:25.102000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:27:25.205088 kernel: audit: type=1334 audit(1719332845.102:483): prog-id=121 op=UNLOAD Jun 25 16:27:25.205281 kernel: audit: type=1334 audit(1719332845.102:484): prog-id=120 op=UNLOAD Jun 25 16:27:25.102000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:27:25.102000 audit: BPF prog-id=122 op=LOAD Jun 25 16:27:25.102000 audit[2785]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2662 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:25.211358 kernel: audit: type=1334 audit(1719332845.102:485): prog-id=122 op=LOAD Jun 25 16:27:25.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066656265373132653737623165633031626165616531323034376635 Jun 25 16:27:25.276008 containerd[1277]: time="2024-06-25T16:27:25.275941661Z" level=info msg="StartContainer for \"0febe712e77b1ec01baeae12047f525289fe9306d855242a6fe7ff08ea8fd004\" returns successfully" Jun 25 16:27:25.413424 kubelet[2274]: E0625 16:27:25.413372 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:25.606940 kubelet[2274]: E0625 16:27:25.605685 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:25.668103 kubelet[2274]: E0625 16:27:25.668063 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.668103 kubelet[2274]: W0625 16:27:25.668091 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.668461 kubelet[2274]: E0625 16:27:25.668117 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.668929 kubelet[2274]: E0625 16:27:25.668901 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.668929 kubelet[2274]: W0625 16:27:25.668921 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.669151 kubelet[2274]: E0625 16:27:25.668941 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.669468 kubelet[2274]: E0625 16:27:25.669447 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.669468 kubelet[2274]: W0625 16:27:25.669462 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.669827 kubelet[2274]: E0625 16:27:25.669477 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.669827 kubelet[2274]: E0625 16:27:25.669753 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.669827 kubelet[2274]: W0625 16:27:25.669764 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.669827 kubelet[2274]: E0625 16:27:25.669776 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.670584 kubelet[2274]: E0625 16:27:25.670558 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.670584 kubelet[2274]: W0625 16:27:25.670574 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.670584 kubelet[2274]: E0625 16:27:25.670587 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.670840 kubelet[2274]: E0625 16:27:25.670828 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.670840 kubelet[2274]: W0625 16:27:25.670838 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.670976 kubelet[2274]: E0625 16:27:25.670850 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.671380 kubelet[2274]: E0625 16:27:25.671352 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.671380 kubelet[2274]: W0625 16:27:25.671372 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.671530 kubelet[2274]: E0625 16:27:25.671386 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.671750 kubelet[2274]: E0625 16:27:25.671728 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.671750 kubelet[2274]: W0625 16:27:25.671747 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.671876 kubelet[2274]: E0625 16:27:25.671765 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.672223 kubelet[2274]: E0625 16:27:25.672202 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.672223 kubelet[2274]: W0625 16:27:25.672218 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.672415 kubelet[2274]: E0625 16:27:25.672239 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.672510 kubelet[2274]: E0625 16:27:25.672490 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.672510 kubelet[2274]: W0625 16:27:25.672505 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.672633 kubelet[2274]: E0625 16:27:25.672517 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.672879 kubelet[2274]: E0625 16:27:25.672863 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.672879 kubelet[2274]: W0625 16:27:25.672876 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.673126 kubelet[2274]: E0625 16:27:25.672888 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.673207 kubelet[2274]: E0625 16:27:25.673161 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.673207 kubelet[2274]: W0625 16:27:25.673173 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.673207 kubelet[2274]: E0625 16:27:25.673186 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.674315 kubelet[2274]: E0625 16:27:25.674288 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.674487 kubelet[2274]: W0625 16:27:25.674466 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.674629 kubelet[2274]: E0625 16:27:25.674607 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.675162 kubelet[2274]: E0625 16:27:25.675134 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.675348 kubelet[2274]: W0625 16:27:25.675333 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.675476 kubelet[2274]: E0625 16:27:25.675456 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.675944 kubelet[2274]: E0625 16:27:25.675906 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.676222 kubelet[2274]: W0625 16:27:25.676199 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.676348 kubelet[2274]: E0625 16:27:25.676328 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.717324 kubelet[2274]: E0625 16:27:25.717288 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.717571 kubelet[2274]: W0625 16:27:25.717545 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.717731 kubelet[2274]: E0625 16:27:25.717713 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.718648 kubelet[2274]: E0625 16:27:25.718621 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.718847 kubelet[2274]: W0625 16:27:25.718830 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.718964 kubelet[2274]: E0625 16:27:25.718951 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.719471 kubelet[2274]: E0625 16:27:25.719450 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.719607 kubelet[2274]: W0625 16:27:25.719593 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.719714 kubelet[2274]: E0625 16:27:25.719698 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.720111 kubelet[2274]: E0625 16:27:25.720087 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.720111 kubelet[2274]: W0625 16:27:25.720107 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.720342 kubelet[2274]: E0625 16:27:25.720131 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.720628 kubelet[2274]: E0625 16:27:25.720588 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.720628 kubelet[2274]: W0625 16:27:25.720604 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.720857 kubelet[2274]: E0625 16:27:25.720831 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.721320 kubelet[2274]: E0625 16:27:25.721300 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.721320 kubelet[2274]: W0625 16:27:25.721315 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.721546 kubelet[2274]: E0625 16:27:25.721526 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.722479 kubelet[2274]: E0625 16:27:25.722452 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.722479 kubelet[2274]: W0625 16:27:25.722476 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.722685 kubelet[2274]: E0625 16:27:25.722665 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.722988 kubelet[2274]: E0625 16:27:25.722966 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.722988 kubelet[2274]: W0625 16:27:25.722986 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.723244 kubelet[2274]: E0625 16:27:25.723225 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.723549 kubelet[2274]: E0625 16:27:25.723529 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.723631 kubelet[2274]: W0625 16:27:25.723550 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.723723 kubelet[2274]: E0625 16:27:25.723706 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.723977 kubelet[2274]: E0625 16:27:25.723956 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.723977 kubelet[2274]: W0625 16:27:25.723975 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.724159 kubelet[2274]: E0625 16:27:25.724009 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.724450 kubelet[2274]: E0625 16:27:25.724428 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.724450 kubelet[2274]: W0625 16:27:25.724448 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.724609 kubelet[2274]: E0625 16:27:25.724472 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.724977 kubelet[2274]: E0625 16:27:25.724961 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.725149 kubelet[2274]: W0625 16:27:25.725134 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.725260 kubelet[2274]: E0625 16:27:25.725247 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.725587 kubelet[2274]: E0625 16:27:25.725572 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.725707 kubelet[2274]: W0625 16:27:25.725689 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.725824 kubelet[2274]: E0625 16:27:25.725802 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.726312 kubelet[2274]: E0625 16:27:25.726295 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.726449 kubelet[2274]: W0625 16:27:25.726431 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.726582 kubelet[2274]: E0625 16:27:25.726562 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.726979 kubelet[2274]: E0625 16:27:25.726962 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.727124 kubelet[2274]: W0625 16:27:25.727102 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.727264 kubelet[2274]: E0625 16:27:25.727246 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.727872 kubelet[2274]: E0625 16:27:25.727853 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.727999 kubelet[2274]: W0625 16:27:25.727983 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.728223 kubelet[2274]: E0625 16:27:25.728203 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:25.729136 kubelet[2274]: E0625 16:27:25.729117 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.729267 kubelet[2274]: W0625 16:27:25.729251 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.729382 kubelet[2274]: E0625 16:27:25.729365 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:25.729824 kubelet[2274]: E0625 16:27:25.729797 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:25.729986 kubelet[2274]: W0625 16:27:25.729964 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:25.730163 kubelet[2274]: E0625 16:27:25.730144 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.506995 containerd[1277]: time="2024-06-25T16:27:26.506910323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.510340 containerd[1277]: time="2024-06-25T16:27:26.510252594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:27:26.512349 containerd[1277]: time="2024-06-25T16:27:26.512295668Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.515950 containerd[1277]: time="2024-06-25T16:27:26.515899400Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.522049 containerd[1277]: time="2024-06-25T16:27:26.521963229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:26.524449 containerd[1277]: time="2024-06-25T16:27:26.524385583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.625312855s" Jun 25 16:27:26.524714 containerd[1277]: time="2024-06-25T16:27:26.524682107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:27:26.528228 containerd[1277]: time="2024-06-25T16:27:26.528178874Z" level=info msg="CreateContainer within sandbox \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:27:26.559544 containerd[1277]: time="2024-06-25T16:27:26.559450504Z" level=info msg="CreateContainer within sandbox \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb\"" Jun 25 16:27:26.560860 containerd[1277]: time="2024-06-25T16:27:26.560805460Z" level=info msg="StartContainer for \"5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb\"" Jun 25 16:27:26.607373 systemd[1]: Started 
cri-containerd-5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb.scope - libcontainer container 5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb. Jun 25 16:27:26.621823 systemd[1]: run-containerd-runc-k8s.io-5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb-runc.Y4Toiu.mount: Deactivated successfully. Jun 25 16:27:26.628793 kubelet[2274]: I0625 16:27:26.628714 2274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:27:26.631012 kubelet[2274]: E0625 16:27:26.629611 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:26.652000 audit: BPF prog-id=123 op=LOAD Jun 25 16:27:26.652000 audit[2861]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2738 pid=2861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564303035333933363061663337346364653764343862636239356666 Jun 25 16:27:26.652000 audit: BPF prog-id=124 op=LOAD Jun 25 16:27:26.652000 audit[2861]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2738 pid=2861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564303035333933363061663337346364653764343862636239356666 Jun 25 16:27:26.652000 audit: BPF prog-id=124 op=UNLOAD Jun 25 16:27:26.652000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:27:26.652000 audit: BPF prog-id=125 op=LOAD Jun 25 16:27:26.652000 audit[2861]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2738 pid=2861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:26.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564303035333933363061663337346364653764343862636239356666 Jun 25 16:27:26.674394 containerd[1277]: time="2024-06-25T16:27:26.674294878Z" level=info msg="StartContainer for \"5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb\" returns successfully" Jun 25 16:27:26.685097 kubelet[2274]: E0625 16:27:26.685061 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.685340 kubelet[2274]: W0625 16:27:26.685320 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
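The audit PROCTITLE values above carry the invoking command line hex-encoded, because /proc/<pid>/cmdline separates argv elements with NUL bytes; decoded, they spell out the runc invocation containerd issues for each container (runc --root /run/containerd/runc/k8s.io --log ...). A minimal Go sketch of the decoding, using the shared prefix of the records above as its sample:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex value back into a readable
    // command line: the kernel hex-encodes /proc/<pid>/cmdline, whose argv
    // elements are separated by NUL bytes.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
        // Shared prefix of the proctitle values recorded above.
        sample := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
        cmd, err := decodeProctitle(sample)
        if err != nil {
            panic(err)
        }
        fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
    }
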
Jun 25 16:27:26.685485 kubelet[2274]: E0625 16:27:26.685468 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.685941 kubelet[2274]: E0625 16:27:26.685915 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.686179 kubelet[2274]: W0625 16:27:26.686158 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.686383 kubelet[2274]: E0625 16:27:26.686328 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.686704 kubelet[2274]: E0625 16:27:26.686686 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.686804 kubelet[2274]: W0625 16:27:26.686793 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.686904 kubelet[2274]: E0625 16:27:26.686893 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.687273 kubelet[2274]: E0625 16:27:26.687256 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.687431 kubelet[2274]: W0625 16:27:26.687419 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.687537 kubelet[2274]: E0625 16:27:26.687524 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.687962 kubelet[2274]: E0625 16:27:26.687946 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.688110 kubelet[2274]: W0625 16:27:26.688098 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.688316 kubelet[2274]: E0625 16:27:26.688302 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:26.688747 kubelet[2274]: E0625 16:27:26.688731 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.688895 kubelet[2274]: W0625 16:27:26.688878 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.689065 kubelet[2274]: E0625 16:27:26.689051 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.689428 kubelet[2274]: E0625 16:27:26.689412 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.689561 kubelet[2274]: W0625 16:27:26.689547 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.689709 kubelet[2274]: E0625 16:27:26.689696 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.690190 kubelet[2274]: E0625 16:27:26.690164 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.690345 kubelet[2274]: W0625 16:27:26.690328 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.690474 kubelet[2274]: E0625 16:27:26.690460 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.692792 kubelet[2274]: E0625 16:27:26.692767 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.692989 kubelet[2274]: W0625 16:27:26.692966 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.693146 kubelet[2274]: E0625 16:27:26.693120 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.693578 kubelet[2274]: E0625 16:27:26.693558 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.693727 kubelet[2274]: W0625 16:27:26.693707 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.693855 kubelet[2274]: E0625 16:27:26.693838 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:27:26.694258 kubelet[2274]: E0625 16:27:26.694238 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.694379 kubelet[2274]: W0625 16:27:26.694364 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.694491 kubelet[2274]: E0625 16:27:26.694466 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.694858 kubelet[2274]: E0625 16:27:26.694840 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.694974 kubelet[2274]: W0625 16:27:26.694955 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.695108 kubelet[2274]: E0625 16:27:26.695089 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.695481 kubelet[2274]: E0625 16:27:26.695466 2274 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:27:26.695641 kubelet[2274]: W0625 16:27:26.695626 2274 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:27:26.695755 kubelet[2274]: E0625 16:27:26.695738 2274 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:27:26.696810 systemd[1]: cri-containerd-5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb.scope: Deactivated successfully. Jun 25 16:27:26.701000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:27:26.810536 containerd[1277]: time="2024-06-25T16:27:26.810346348Z" level=info msg="shim disconnected" id=5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb namespace=k8s.io Jun 25 16:27:26.810536 containerd[1277]: time="2024-06-25T16:27:26.810425223Z" level=warning msg="cleaning up after shim disconnected" id=5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb namespace=k8s.io Jun 25 16:27:26.810536 containerd[1277]: time="2024-06-25T16:27:26.810437397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:27:26.925772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d00539360af374cde7d48bcb95ffcb8227537ed5483a8b9d7fbb016075041cb-rootfs.mount: Deactivated successfully. 
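The repeated driver-call.go and plugins.go errors above come from the kubelet's FlexVolume prober: it executes each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec with the init argument and expects a JSON status on stdout, so a missing nodeagent~uds/uds binary produces both the exec failure and the "unexpected end of JSON input" unmarshal failure seen here. A rough Go sketch of that call pattern, an illustration rather than the kubelet's actual code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is the minimal shape of a FlexVolume reply; a present,
    // working driver would answer `init` with something like
    // {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func callDriver(path string, args ...string) (*driverStatus, error) {
        out, err := exec.Command(path, args...).CombinedOutput()
        if err != nil {
            // The exec itself fails while the driver binary is missing.
            return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Empty stdout yields "unexpected end of JSON input", as in the log.
            return nil, fmt.Errorf("failed to unmarshal output: %v", err)
        }
        return &st, nil
    }

    func main() {
        // Same driver path the kubelet is probing in the messages above.
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(err)
    }
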
Jun 25 16:27:27.414413 kubelet[2274]: E0625 16:27:27.414198 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:27.634179 kubelet[2274]: E0625 16:27:27.634145 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:27.636339 containerd[1277]: time="2024-06-25T16:27:27.636275792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:27:27.672102 kubelet[2274]: I0625 16:27:27.669797 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55c97dbb7-lpsnt" podStartSLOduration=3.696102544 podStartE2EDuration="6.669771101s" podCreationTimestamp="2024-06-25 16:27:21 +0000 UTC" firstStartedPulling="2024-06-25 16:27:21.923749463 +0000 UTC m=+24.800352685" lastFinishedPulling="2024-06-25 16:27:24.897417986 +0000 UTC m=+27.774021242" observedRunningTime="2024-06-25 16:27:25.631164457 +0000 UTC m=+28.507767691" watchObservedRunningTime="2024-06-25 16:27:27.669771101 +0000 UTC m=+30.546374335" Jun 25 16:27:29.413664 kubelet[2274]: E0625 16:27:29.413596 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:31.413484 kubelet[2274]: E0625 16:27:31.413389 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:32.448479 containerd[1277]: time="2024-06-25T16:27:32.448387127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.450522 containerd[1277]: time="2024-06-25T16:27:32.450441572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:27:32.450886 containerd[1277]: time="2024-06-25T16:27:32.450847486Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.454401 containerd[1277]: time="2024-06-25T16:27:32.454335840Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.457142 containerd[1277]: time="2024-06-25T16:27:32.457087327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:32.458819 containerd[1277]: time="2024-06-25T16:27:32.458754915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.822413417s" Jun 25 16:27:32.459324 containerd[1277]: time="2024-06-25T16:27:32.459279082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:27:32.465901 containerd[1277]: time="2024-06-25T16:27:32.465837056Z" level=info msg="CreateContainer within sandbox \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:27:32.534744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577915197.mount: Deactivated successfully. Jun 25 16:27:32.547200 containerd[1277]: time="2024-06-25T16:27:32.546892577Z" level=info msg="CreateContainer within sandbox \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4\"" Jun 25 16:27:32.548803 containerd[1277]: time="2024-06-25T16:27:32.547874810Z" level=info msg="StartContainer for \"e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4\"" Jun 25 16:27:32.773513 systemd[1]: run-containerd-runc-k8s.io-e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4-runc.lhZEpX.mount: Deactivated successfully. Jun 25 16:27:32.789436 systemd[1]: Started cri-containerd-e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4.scope - libcontainer container e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4. 
Jun 25 16:27:32.823564 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 16:27:32.823800 kernel: audit: type=1334 audit(1719332852.817:492): prog-id=126 op=LOAD Jun 25 16:27:32.817000 audit: BPF prog-id=126 op=LOAD Jun 25 16:27:32.832271 kernel: audit: type=1300 audit(1719332852.817:492): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2738 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.817000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2738 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537393231643762346531353163363235303531393330326639633665 Jun 25 16:27:32.840063 kernel: audit: type=1327 audit(1719332852.817:492): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537393231643762346531353163363235303531393330326639633665 Jun 25 16:27:32.823000 audit: BPF prog-id=127 op=LOAD Jun 25 16:27:32.845082 kernel: audit: type=1334 audit(1719332852.823:493): prog-id=127 op=LOAD Jun 25 16:27:32.823000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2738 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.853120 kernel: audit: type=1300 audit(1719332852.823:493): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2738 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537393231643762346531353163363235303531393330326639633665 Jun 25 16:27:32.861086 kernel: audit: type=1327 audit(1719332852.823:493): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537393231643762346531353163363235303531393330326639633665 Jun 25 16:27:32.823000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:27:32.870066 kernel: audit: type=1334 audit(1719332852.823:494): prog-id=127 op=UNLOAD Jun 25 16:27:32.823000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:27:32.877315 kernel: audit: type=1334 audit(1719332852.823:495): prog-id=126 op=UNLOAD Jun 25 16:27:32.880706 kernel: audit: type=1334 audit(1719332852.823:496): prog-id=128 op=LOAD Jun 25 16:27:32.880840 kernel: audit: type=1300 audit(1719332852.823:496): arch=c000003e syscall=321 success=yes exit=15 a0=5 
a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2738 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.823000 audit: BPF prog-id=128 op=LOAD Jun 25 16:27:32.823000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2738 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:32.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537393231643762346531353163363235303531393330326639633665 Jun 25 16:27:32.892675 containerd[1277]: time="2024-06-25T16:27:32.892594779Z" level=info msg="StartContainer for \"e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4\" returns successfully" Jun 25 16:27:33.411852 kubelet[2274]: E0625 16:27:33.411770 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:33.667364 systemd[1]: cri-containerd-e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4.scope: Deactivated successfully. Jun 25 16:27:33.670000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:27:33.687065 kubelet[2274]: E0625 16:27:33.686934 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:33.741512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4-rootfs.mount: Deactivated successfully. Jun 25 16:27:33.754519 containerd[1277]: time="2024-06-25T16:27:33.754425591Z" level=info msg="shim disconnected" id=e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4 namespace=k8s.io Jun 25 16:27:33.755501 containerd[1277]: time="2024-06-25T16:27:33.755455835Z" level=warning msg="cleaning up after shim disconnected" id=e7921d7b4e151c6250519302f9c6ecd5cb863c1fbc8b62eaa14cf47ac3d7efe4 namespace=k8s.io Jun 25 16:27:33.755663 containerd[1277]: time="2024-06-25T16:27:33.755643366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:27:33.760317 kubelet[2274]: I0625 16:27:33.760079 2274 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 16:27:33.811000 kubelet[2274]: I0625 16:27:33.809985 2274 topology_manager.go:215] "Topology Admit Handler" podUID="fd5671b3-3dd7-4434-840a-28ff60f28b4d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ctxnk" Jun 25 16:27:33.822948 systemd[1]: Created slice kubepods-burstable-podfd5671b3_3dd7_4434_840a_28ff60f28b4d.slice - libcontainer container kubepods-burstable-podfd5671b3_3dd7_4434_840a_28ff60f28b4d.slice. 
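The Created slice messages show how the systemd cgroup driver names each pod's transient slice: the QoS class sits in the prefix and the pod UID follows with its dashes replaced by underscores (the besteffort variant of the same pattern appears just below for calico-kube-controllers). A short sketch of that mapping, using the coredns pod UID admitted above:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces the transient slice names systemd reports above:
    // the pod's QoS class appears in the prefix and dashes in the pod UID
    // are replaced with underscores.
    func podSlice(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // coredns-7db6d8ff4d-ctxnk, from the Topology Admit Handler entry above.
        fmt.Println(podSlice("burstable", "fd5671b3-3dd7-4434-840a-28ff60f28b4d"))
        // prints: kubepods-burstable-podfd5671b3_3dd7_4434_840a_28ff60f28b4d.slice
    }
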
Jun 25 16:27:33.836164 kubelet[2274]: I0625 16:27:33.836103 2274 topology_manager.go:215] "Topology Admit Handler" podUID="311c726d-a38f-4e12-bc55-98e4f6f1ab2a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8vxgt" Jun 25 16:27:33.840093 kubelet[2274]: I0625 16:27:33.840020 2274 topology_manager.go:215] "Topology Admit Handler" podUID="9e379c3e-11d3-4f6c-94e5-9fc57783d470" podNamespace="calico-system" podName="calico-kube-controllers-d96847fc9-dl7wj" Jun 25 16:27:33.848752 systemd[1]: Created slice kubepods-burstable-pod311c726d_a38f_4e12_bc55_98e4f6f1ab2a.slice - libcontainer container kubepods-burstable-pod311c726d_a38f_4e12_bc55_98e4f6f1ab2a.slice. Jun 25 16:27:33.866410 systemd[1]: Created slice kubepods-besteffort-pod9e379c3e_11d3_4f6c_94e5_9fc57783d470.slice - libcontainer container kubepods-besteffort-pod9e379c3e_11d3_4f6c_94e5_9fc57783d470.slice. Jun 25 16:27:33.912653 kubelet[2274]: I0625 16:27:33.912593 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e379c3e-11d3-4f6c-94e5-9fc57783d470-tigera-ca-bundle\") pod \"calico-kube-controllers-d96847fc9-dl7wj\" (UID: \"9e379c3e-11d3-4f6c-94e5-9fc57783d470\") " pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" Jun 25 16:27:33.913126 kubelet[2274]: I0625 16:27:33.913012 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55bdx\" (UniqueName: \"kubernetes.io/projected/311c726d-a38f-4e12-bc55-98e4f6f1ab2a-kube-api-access-55bdx\") pod \"coredns-7db6d8ff4d-8vxgt\" (UID: \"311c726d-a38f-4e12-bc55-98e4f6f1ab2a\") " pod="kube-system/coredns-7db6d8ff4d-8vxgt" Jun 25 16:27:33.913400 kubelet[2274]: I0625 16:27:33.913370 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5671b3-3dd7-4434-840a-28ff60f28b4d-config-volume\") pod \"coredns-7db6d8ff4d-ctxnk\" (UID: \"fd5671b3-3dd7-4434-840a-28ff60f28b4d\") " pod="kube-system/coredns-7db6d8ff4d-ctxnk" Jun 25 16:27:33.913637 kubelet[2274]: I0625 16:27:33.913597 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/311c726d-a38f-4e12-bc55-98e4f6f1ab2a-config-volume\") pod \"coredns-7db6d8ff4d-8vxgt\" (UID: \"311c726d-a38f-4e12-bc55-98e4f6f1ab2a\") " pod="kube-system/coredns-7db6d8ff4d-8vxgt" Jun 25 16:27:33.913863 kubelet[2274]: I0625 16:27:33.913826 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrh5\" (UniqueName: \"kubernetes.io/projected/fd5671b3-3dd7-4434-840a-28ff60f28b4d-kube-api-access-4hrh5\") pod \"coredns-7db6d8ff4d-ctxnk\" (UID: \"fd5671b3-3dd7-4434-840a-28ff60f28b4d\") " pod="kube-system/coredns-7db6d8ff4d-ctxnk" Jun 25 16:27:33.914649 kubelet[2274]: I0625 16:27:33.914124 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scsb6\" (UniqueName: \"kubernetes.io/projected/9e379c3e-11d3-4f6c-94e5-9fc57783d470-kube-api-access-scsb6\") pod \"calico-kube-controllers-d96847fc9-dl7wj\" (UID: \"9e379c3e-11d3-4f6c-94e5-9fc57783d470\") " pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" Jun 25 16:27:34.129597 kubelet[2274]: E0625 16:27:34.127986 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:34.130849 containerd[1277]: time="2024-06-25T16:27:34.130230686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ctxnk,Uid:fd5671b3-3dd7-4434-840a-28ff60f28b4d,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:34.164974 kubelet[2274]: E0625 16:27:34.162369 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:34.165222 containerd[1277]: time="2024-06-25T16:27:34.163218644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vxgt,Uid:311c726d-a38f-4e12-bc55-98e4f6f1ab2a,Namespace:kube-system,Attempt:0,}" Jun 25 16:27:34.171872 containerd[1277]: time="2024-06-25T16:27:34.171800562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d96847fc9-dl7wj,Uid:9e379c3e-11d3-4f6c-94e5-9fc57783d470,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:34.433471 containerd[1277]: time="2024-06-25T16:27:34.433273738Z" level=error msg="Failed to destroy network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.434490 containerd[1277]: time="2024-06-25T16:27:34.434435365Z" level=error msg="encountered an error cleaning up failed sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.435043 containerd[1277]: time="2024-06-25T16:27:34.434995795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d96847fc9-dl7wj,Uid:9e379c3e-11d3-4f6c-94e5-9fc57783d470,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.435709 kubelet[2274]: E0625 16:27:34.435633 2274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.436214 kubelet[2274]: E0625 16:27:34.435734 2274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" Jun 25 16:27:34.436214 kubelet[2274]: E0625 16:27:34.435768 2274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" Jun 25 16:27:34.436214 kubelet[2274]: E0625 16:27:34.435829 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d96847fc9-dl7wj_calico-system(9e379c3e-11d3-4f6c-94e5-9fc57783d470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d96847fc9-dl7wj_calico-system(9e379c3e-11d3-4f6c-94e5-9fc57783d470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" podUID="9e379c3e-11d3-4f6c-94e5-9fc57783d470" Jun 25 16:27:34.444524 containerd[1277]: time="2024-06-25T16:27:34.444447085Z" level=error msg="Failed to destroy network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.445176 containerd[1277]: time="2024-06-25T16:27:34.445122020Z" level=error msg="encountered an error cleaning up failed sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.445389 containerd[1277]: time="2024-06-25T16:27:34.445352178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vxgt,Uid:311c726d-a38f-4e12-bc55-98e4f6f1ab2a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.445837 kubelet[2274]: E0625 16:27:34.445789 2274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.445979 kubelet[2274]: E0625 16:27:34.445868 2274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8vxgt" Jun 25 
16:27:34.445979 kubelet[2274]: E0625 16:27:34.445914 2274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8vxgt" Jun 25 16:27:34.446137 kubelet[2274]: E0625 16:27:34.445976 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8vxgt_kube-system(311c726d-a38f-4e12-bc55-98e4f6f1ab2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8vxgt_kube-system(311c726d-a38f-4e12-bc55-98e4f6f1ab2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8vxgt" podUID="311c726d-a38f-4e12-bc55-98e4f6f1ab2a" Jun 25 16:27:34.462676 containerd[1277]: time="2024-06-25T16:27:34.462600830Z" level=error msg="Failed to destroy network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.463793 containerd[1277]: time="2024-06-25T16:27:34.463719909Z" level=error msg="encountered an error cleaning up failed sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.464218 containerd[1277]: time="2024-06-25T16:27:34.464160750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ctxnk,Uid:fd5671b3-3dd7-4434-840a-28ff60f28b4d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.465813 kubelet[2274]: E0625 16:27:34.464607 2274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.465813 kubelet[2274]: E0625 16:27:34.464703 2274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ctxnk" Jun 25 16:27:34.465813 kubelet[2274]: E0625 16:27:34.464738 2274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ctxnk" Jun 25 16:27:34.466215 kubelet[2274]: E0625 16:27:34.464804 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ctxnk_kube-system(fd5671b3-3dd7-4434-840a-28ff60f28b4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ctxnk_kube-system(fd5671b3-3dd7-4434-840a-28ff60f28b4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ctxnk" podUID="fd5671b3-3dd7-4434-840a-28ff60f28b4d" Jun 25 16:27:34.693403 kubelet[2274]: E0625 16:27:34.691561 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:34.695607 containerd[1277]: time="2024-06-25T16:27:34.695553802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:27:34.696529 kubelet[2274]: I0625 16:27:34.696497 2274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:34.698623 containerd[1277]: time="2024-06-25T16:27:34.697539117Z" level=info msg="StopPodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\"" Jun 25 16:27:34.706160 kubelet[2274]: I0625 16:27:34.706124 2274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:34.709405 containerd[1277]: time="2024-06-25T16:27:34.708923581Z" level=info msg="StopPodSandbox for \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\"" Jun 25 16:27:34.715163 containerd[1277]: time="2024-06-25T16:27:34.715099624Z" level=info msg="Ensure that sandbox 1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca in task-service has been cleanup successfully" Jun 25 16:27:34.718742 containerd[1277]: time="2024-06-25T16:27:34.718666979Z" level=info msg="Ensure that sandbox bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b in task-service has been cleanup successfully" Jun 25 16:27:34.721013 kubelet[2274]: I0625 16:27:34.720974 2274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:34.722161 containerd[1277]: time="2024-06-25T16:27:34.722097138Z" level=info msg="StopPodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\"" Jun 25 16:27:34.722504 containerd[1277]: time="2024-06-25T16:27:34.722473460Z" level=info msg="Ensure that sandbox 
88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a in task-service has been cleanup successfully" Jun 25 16:27:34.742072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b-shm.mount: Deactivated successfully. Jun 25 16:27:34.810940 containerd[1277]: time="2024-06-25T16:27:34.810877565Z" level=error msg="StopPodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" failed" error="failed to destroy network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.811905 kubelet[2274]: E0625 16:27:34.811829 2274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:34.812098 kubelet[2274]: E0625 16:27:34.811937 2274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a"} Jun 25 16:27:34.812183 kubelet[2274]: E0625 16:27:34.812156 2274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"311c726d-a38f-4e12-bc55-98e4f6f1ab2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:34.812266 kubelet[2274]: E0625 16:27:34.812204 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"311c726d-a38f-4e12-bc55-98e4f6f1ab2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8vxgt" podUID="311c726d-a38f-4e12-bc55-98e4f6f1ab2a" Jun 25 16:27:34.831410 containerd[1277]: time="2024-06-25T16:27:34.830648320Z" level=error msg="StopPodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" failed" error="failed to destroy network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.834747 kubelet[2274]: E0625 16:27:34.833507 2274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:34.835099 kubelet[2274]: E0625 16:27:34.835014 2274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca"} Jun 25 16:27:34.835341 kubelet[2274]: E0625 16:27:34.835307 2274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e379c3e-11d3-4f6c-94e5-9fc57783d470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:34.835618 kubelet[2274]: E0625 16:27:34.835532 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e379c3e-11d3-4f6c-94e5-9fc57783d470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" podUID="9e379c3e-11d3-4f6c-94e5-9fc57783d470" Jun 25 16:27:34.846202 containerd[1277]: time="2024-06-25T16:27:34.846112818Z" level=error msg="StopPodSandbox for \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" failed" error="failed to destroy network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:34.846528 kubelet[2274]: E0625 16:27:34.846476 2274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:34.846674 kubelet[2274]: E0625 16:27:34.846553 2274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b"} Jun 25 16:27:34.846674 kubelet[2274]: E0625 16:27:34.846600 2274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd5671b3-3dd7-4434-840a-28ff60f28b4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:34.846674 kubelet[2274]: E0625 16:27:34.846645 
2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd5671b3-3dd7-4434-840a-28ff60f28b4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ctxnk" podUID="fd5671b3-3dd7-4434-840a-28ff60f28b4d" Jun 25 16:27:35.153015 kubelet[2274]: I0625 16:27:35.151412 2274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:27:35.153015 kubelet[2274]: E0625 16:27:35.152461 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:35.206000 audit[3159]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:35.206000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd524cd320 a2=0 a3=7ffd524cd30c items=0 ppid=2449 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:35.208000 audit[3159]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:35.208000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd524cd320 a2=0 a3=7ffd524cd30c items=0 ppid=2449 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:35.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:35.421991 systemd[1]: Created slice kubepods-besteffort-pod66a2358e_62b7_4455_bce2_ea313197d5cb.slice - libcontainer container kubepods-besteffort-pod66a2358e_62b7_4455_bce2_ea313197d5cb.slice. 
Jun 25 16:27:35.426954 containerd[1277]: time="2024-06-25T16:27:35.426897874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrghb,Uid:66a2358e-62b7-4455-bce2-ea313197d5cb,Namespace:calico-system,Attempt:0,}" Jun 25 16:27:35.586660 containerd[1277]: time="2024-06-25T16:27:35.586556641Z" level=error msg="Failed to destroy network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:35.592460 containerd[1277]: time="2024-06-25T16:27:35.591110299Z" level=error msg="encountered an error cleaning up failed sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:35.592460 containerd[1277]: time="2024-06-25T16:27:35.591727400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrghb,Uid:66a2358e-62b7-4455-bce2-ea313197d5cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:35.590855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49-shm.mount: Deactivated successfully. 
Jun 25 16:27:35.595425 kubelet[2274]: E0625 16:27:35.593352 2274 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:27:35.595425 kubelet[2274]: E0625 16:27:35.593483 2274 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:35.595425 kubelet[2274]: E0625 16:27:35.593525 2274 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qrghb" Jun 25 16:27:35.596079 kubelet[2274]: E0625 16:27:35.593605 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qrghb_calico-system(66a2358e-62b7-4455-bce2-ea313197d5cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qrghb_calico-system(66a2358e-62b7-4455-bce2-ea313197d5cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:35.729282 kubelet[2274]: I0625 16:27:35.726124 2274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:35.729282 kubelet[2274]: E0625 16:27:35.727081 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:35.729504 containerd[1277]: time="2024-06-25T16:27:35.727545315Z" level=info msg="StopPodSandbox for \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\"" Jun 25 16:27:35.729504 containerd[1277]: time="2024-06-25T16:27:35.727970836Z" level=info msg="Ensure that sandbox fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49 in task-service has been cleanup successfully" Jun 25 16:27:35.791655 containerd[1277]: time="2024-06-25T16:27:35.791571525Z" level=error msg="StopPodSandbox for \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" failed" error="failed to destroy network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 25 16:27:35.792417 kubelet[2274]: E0625 16:27:35.792331 2274 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:35.792608 kubelet[2274]: E0625 16:27:35.792416 2274 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49"} Jun 25 16:27:35.792608 kubelet[2274]: E0625 16:27:35.792469 2274 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66a2358e-62b7-4455-bce2-ea313197d5cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:27:35.792608 kubelet[2274]: E0625 16:27:35.792507 2274 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66a2358e-62b7-4455-bce2-ea313197d5cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qrghb" podUID="66a2358e-62b7-4455-bce2-ea313197d5cb" Jun 25 16:27:40.630218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562206424.mount: Deactivated successfully. 
Jun 25 16:27:40.682267 containerd[1277]: time="2024-06-25T16:27:40.682126170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:40.686443 containerd[1277]: time="2024-06-25T16:27:40.686352055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:27:40.688458 containerd[1277]: time="2024-06-25T16:27:40.688354071Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:40.692241 containerd[1277]: time="2024-06-25T16:27:40.692143026Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:40.695666 containerd[1277]: time="2024-06-25T16:27:40.695605806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:40.697579 containerd[1277]: time="2024-06-25T16:27:40.697467042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 6.001851621s" Jun 25 16:27:40.697869 containerd[1277]: time="2024-06-25T16:27:40.697834758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:27:40.730261 containerd[1277]: time="2024-06-25T16:27:40.730164461Z" level=info msg="CreateContainer within sandbox \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:27:40.771538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450942048.mount: Deactivated successfully. Jun 25 16:27:40.780627 containerd[1277]: time="2024-06-25T16:27:40.780529963Z" level=info msg="CreateContainer within sandbox \"4be1d503b557c9377f3d75b137b7131d967d3417f6ebf17ece368fff2fc3d776\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084\"" Jun 25 16:27:40.781497 containerd[1277]: time="2024-06-25T16:27:40.781452276Z" level=info msg="StartContainer for \"2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084\"" Jun 25 16:27:40.828412 systemd[1]: Started cri-containerd-2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084.scope - libcontainer container 2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084. 
Jun 25 16:27:40.873136 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:27:40.873396 kernel: audit: type=1334 audit(1719332860.862:500): prog-id=129 op=LOAD Jun 25 16:27:40.862000 audit: BPF prog-id=129 op=LOAD Jun 25 16:27:40.894826 kernel: audit: type=1300 audit(1719332860.862:500): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2738 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.862000 audit[3223]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2738 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236373163393731623036653431306538316439353431646634306130 Jun 25 16:27:40.908183 kernel: audit: type=1327 audit(1719332860.862:500): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236373163393731623036653431306538316439353431646634306130 Jun 25 16:27:40.866000 audit: BPF prog-id=130 op=LOAD Jun 25 16:27:40.912159 kernel: audit: type=1334 audit(1719332860.866:501): prog-id=130 op=LOAD Jun 25 16:27:40.866000 audit[3223]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2738 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.973992 kernel: audit: type=1300 audit(1719332860.866:501): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2738 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.974149 kernel: audit: type=1327 audit(1719332860.866:501): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236373163393731623036653431306538316439353431646634306130 Jun 25 16:27:40.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236373163393731623036653431306538316439353431646634306130 Jun 25 16:27:40.866000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:27:40.978753 kernel: audit: type=1334 audit(1719332860.866:502): prog-id=130 op=UNLOAD Jun 25 16:27:40.978895 kernel: audit: type=1334 audit(1719332860.866:503): prog-id=129 op=UNLOAD Jun 25 16:27:40.866000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:27:40.866000 audit: BPF prog-id=131 op=LOAD Jun 25 16:27:40.982880 kernel: audit: type=1334 audit(1719332860.866:504): prog-id=131 op=LOAD Jun 25 16:27:40.866000 audit[3223]: SYSCALL arch=c000003e syscall=321 
success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2738 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:41.012049 kernel: audit: type=1300 audit(1719332860.866:504): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2738 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:40.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236373163393731623036653431306538316439353431646634306130 Jun 25 16:27:41.050402 containerd[1277]: time="2024-06-25T16:27:41.050331278Z" level=info msg="StartContainer for \"2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084\" returns successfully" Jun 25 16:27:41.126217 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:27:41.126390 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:27:41.175383 systemd[1]: Started sshd@7-161.35.235.79:22-139.178.89.65:40778.service - OpenSSH per-connection server daemon (139.178.89.65:40778). Jun 25 16:27:41.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-161.35.235.79:22-139.178.89.65:40778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.258316 sshd[3263]: Accepted publickey for core from 139.178.89.65 port 40778 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:41.257000 audit[3263]: USER_ACCT pid=3263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:41.258000 audit[3263]: CRED_ACQ pid=3263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:41.258000 audit[3263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1ff92880 a2=3 a3=7f7dda23c480 items=0 ppid=1 pid=3263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:41.258000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:41.261657 sshd[3263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:41.284952 systemd-logind[1266]: New session 8 of user core. Jun 25 16:27:41.290407 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 16:27:41.299000 audit[3263]: USER_START pid=3263 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:41.301000 audit[3266]: CRED_ACQ pid=3266 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:41.564484 sshd[3263]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:41.566000 audit[3263]: USER_END pid=3263 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:41.567000 audit[3263]: CRED_DISP pid=3263 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:41.571575 systemd-logind[1266]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:27:41.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-161.35.235.79:22-139.178.89.65:40778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.573289 systemd[1]: sshd@7-161.35.235.79:22-139.178.89.65:40778.service: Deactivated successfully. Jun 25 16:27:41.574528 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:27:41.577776 systemd-logind[1266]: Removed session 8. 
Jun 25 16:27:41.774771 kubelet[2274]: E0625 16:27:41.774734 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:41.797350 kubelet[2274]: I0625 16:27:41.797281 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rr8wv" podStartSLOduration=2.269486598 podStartE2EDuration="20.797261078s" podCreationTimestamp="2024-06-25 16:27:21 +0000 UTC" firstStartedPulling="2024-06-25 16:27:22.171240086 +0000 UTC m=+25.047843295" lastFinishedPulling="2024-06-25 16:27:40.699014565 +0000 UTC m=+43.575617775" observedRunningTime="2024-06-25 16:27:41.79683604 +0000 UTC m=+44.673439270" watchObservedRunningTime="2024-06-25 16:27:41.797261078 +0000 UTC m=+44.673864309" Jun 25 16:27:42.775573 kubelet[2274]: I0625 16:27:42.775531 2274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:27:42.777439 kubelet[2274]: E0625 16:27:42.777393 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:42.884000 audit[3340]: AVC avc: denied { write } for pid=3340 comm="tee" name="fd" dev="proc" ino=25209 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.902000 audit[3332]: AVC avc: denied { write } for pid=3332 comm="tee" name="fd" dev="proc" ino=25707 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.902000 audit[3332]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed5815a11 a2=241 a3=1b6 items=1 ppid=3303 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.902000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:27:42.902000 audit: PATH item=0 name="/dev/fd/63" inode=25195 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.904000 audit[3352]: AVC avc: denied { write } for pid=3352 comm="tee" name="fd" dev="proc" ino=25218 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:42.884000 audit[3340]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe22718a12 a2=241 a3=1b6 items=1 ppid=3301 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.884000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:27:42.884000 audit: PATH item=0 name="/dev/fd/63" inode=25202 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 
16:27:42.904000 audit[3352]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8b21ca11 a2=241 a3=1b6 items=1 ppid=3305 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.904000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:27:42.904000 audit: PATH item=0 name="/dev/fd/63" inode=25215 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.904000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:42.930000 audit[3346]: AVC avc: denied { write } for pid=3346 comm="tee" name="fd" dev="proc" ino=25716 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.930000 audit[3346]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdfed13a02 a2=241 a3=1b6 items=1 ppid=3310 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.930000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:27:42.930000 audit: PATH item=0 name="/dev/fd/63" inode=25212 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:42.945000 audit[3360]: AVC avc: denied { write } for pid=3360 comm="tee" name="fd" dev="proc" ino=25720 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.945000 audit[3360]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc94409a13 a2=241 a3=1b6 items=1 ppid=3300 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.945000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:27:42.945000 audit: PATH item=0 name="/dev/fd/63" inode=25219 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:42.967000 audit[3369]: AVC avc: denied { write } for pid=3369 comm="tee" name="fd" dev="proc" ino=25731 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.967000 audit[3369]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe01b1aa01 a2=241 a3=1b6 items=1 ppid=3318 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.967000 audit: CWD 
cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:27:42.967000 audit: PATH item=0 name="/dev/fd/63" inode=25223 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:42.970000 audit[3377]: AVC avc: denied { write } for pid=3377 comm="tee" name="fd" dev="proc" ino=25226 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:27:42.970000 audit[3377]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffdbd27a11 a2=241 a3=1b6 items=1 ppid=3314 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.970000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:27:42.970000 audit: PATH item=0 name="/dev/fd/63" inode=25728 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:27:42.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:27:43.582224 systemd-networkd[1091]: vxlan.calico: Link UP Jun 25 16:27:43.582800 systemd-networkd[1091]: vxlan.calico: Gained carrier Jun 25 16:27:43.633000 audit: BPF prog-id=132 op=LOAD Jun 25 16:27:43.633000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc3f3375c0 a2=70 a3=7f774d6d7000 items=0 ppid=3306 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.633000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:43.633000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:27:43.633000 audit: BPF prog-id=133 op=LOAD Jun 25 16:27:43.633000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc3f3375c0 a2=70 a3=6f items=0 ppid=3306 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.633000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:43.633000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:27:43.633000 audit: BPF prog-id=134 op=LOAD Jun 25 16:27:43.633000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc3f337550 a2=70 a3=7ffc3f3375c0 items=0 ppid=3306 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.633000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:43.633000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:27:43.634000 audit: BPF prog-id=135 op=LOAD Jun 25 16:27:43.634000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc3f337580 a2=70 a3=0 items=0 ppid=3306 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.634000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:27:43.662000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:27:43.768000 audit[3470]: NETFILTER_CFG table=raw:97 family=2 entries=19 op=nft_register_chain pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:43.768000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffc2de7f160 a2=0 a3=7ffc2de7f14c items=0 ppid=3306 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.768000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:43.771000 audit[3471]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3471 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:43.771000 audit[3471]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe88c330f0 a2=0 a3=7ffe88c330dc items=0 ppid=3306 pid=3471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.771000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:43.772000 audit[3472]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=3472 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:43.772000 audit[3472]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe59be8990 a2=0 a3=7ffe59be897c items=0 ppid=3306 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.772000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:43.785000 audit[3473]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3473 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:43.785000 audit[3473]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffdea829560 a2=0 a3=7ffdea82954c items=0 ppid=3306 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:43.785000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:44.970314 systemd-networkd[1091]: vxlan.calico: Gained IPv6LL Jun 25 16:27:46.413978 containerd[1277]: time="2024-06-25T16:27:46.413203012Z" level=info msg="StopPodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\"" Jun 25 16:27:46.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-161.35.235.79:22-139.178.89.65:33998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:46.590332 kernel: kauditd_printk_skb: 75 callbacks suppressed Jun 25 16:27:46.590410 kernel: audit: type=1130 audit(1719332866.588:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-161.35.235.79:22-139.178.89.65:33998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:46.589064 systemd[1]: Started sshd@8-161.35.235.79:22-139.178.89.65:33998.service - OpenSSH per-connection server daemon (139.178.89.65:33998). Jun 25 16:27:46.696000 audit[3511]: USER_ACCT pid=3511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.700237 sshd[3511]: Accepted publickey for core from 139.178.89.65 port 33998 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:46.702087 kernel: audit: type=1101 audit(1719332866.696:534): pid=3511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.702000 audit[3511]: CRED_ACQ pid=3511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.705638 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:46.709905 kernel: audit: type=1103 audit(1719332866.702:535): pid=3511 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.710056 kernel: audit: type=1006 audit(1719332866.702:536): pid=3511 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 16:27:46.702000 audit[3511]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4fa55670 a2=3 a3=7efe0d90a480 items=0 ppid=1 pid=3511 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:46.718145 kernel: audit: type=1300 audit(1719332866.702:536): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4fa55670 a2=3 
a3=7efe0d90a480 items=0 ppid=1 pid=3511 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:46.718329 kernel: audit: type=1327 audit(1719332866.702:536): proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:46.702000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:46.721901 systemd-logind[1266]: New session 9 of user core. Jun 25 16:27:46.726373 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:27:46.745993 kernel: audit: type=1105 audit(1719332866.736:537): pid=3511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.746224 kernel: audit: type=1103 audit(1719332866.736:538): pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.736000 audit[3511]: USER_START pid=3511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.736000 audit[3513]: CRED_ACQ pid=3513 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.507 [INFO][3500] k8s.go 608: Cleaning up netns ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.507 [INFO][3500] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" iface="eth0" netns="/var/run/netns/cni-48a10b40-841f-76d4-6049-54ac9aa71e27" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.508 [INFO][3500] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" iface="eth0" netns="/var/run/netns/cni-48a10b40-841f-76d4-6049-54ac9aa71e27" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.508 [INFO][3500] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" iface="eth0" netns="/var/run/netns/cni-48a10b40-841f-76d4-6049-54ac9aa71e27" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.508 [INFO][3500] k8s.go 615: Releasing IP address(es) ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.508 [INFO][3500] utils.go 188: Calico CNI releasing IP address ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.779 [INFO][3506] ipam_plugin.go 411: Releasing address using handleID ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.781 [INFO][3506] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.782 [INFO][3506] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.806 [WARNING][3506] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.807 [INFO][3506] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.811 [INFO][3506] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:46.817865 containerd[1277]: 2024-06-25 16:27:46.814 [INFO][3500] k8s.go 621: Teardown processing complete. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:46.822605 containerd[1277]: time="2024-06-25T16:27:46.822231984Z" level=info msg="TearDown network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" successfully" Jun 25 16:27:46.822605 containerd[1277]: time="2024-06-25T16:27:46.822291197Z" level=info msg="StopPodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" returns successfully" Jun 25 16:27:46.822953 systemd[1]: run-netns-cni\x2d48a10b40\x2d841f\x2d76d4\x2d6049\x2d54ac9aa71e27.mount: Deactivated successfully. 
Jun 25 16:27:46.857379 containerd[1277]: time="2024-06-25T16:27:46.857331128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d96847fc9-dl7wj,Uid:9e379c3e-11d3-4f6c-94e5-9fc57783d470,Namespace:calico-system,Attempt:1,}" Jun 25 16:27:47.032929 sshd[3511]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:47.036000 audit[3511]: USER_END pid=3511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:47.036000 audit[3511]: CRED_DISP pid=3511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:47.041353 systemd[1]: sshd@8-161.35.235.79:22-139.178.89.65:33998.service: Deactivated successfully. Jun 25 16:27:47.042778 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:27:47.045236 kernel: audit: type=1106 audit(1719332867.036:539): pid=3511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:47.045288 kernel: audit: type=1104 audit(1719332867.036:540): pid=3511 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:47.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-161.35.235.79:22-139.178.89.65:33998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:47.047298 systemd-logind[1266]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:27:47.049380 systemd-logind[1266]: Removed session 9. 
Jun 25 16:27:47.184438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:47.184639 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali405fa104dbb: link becomes ready Jun 25 16:27:47.183624 systemd-networkd[1091]: cali405fa104dbb: Link UP Jun 25 16:27:47.187140 systemd-networkd[1091]: cali405fa104dbb: Gained carrier Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:46.970 [INFO][3524] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0 calico-kube-controllers-d96847fc9- calico-system 9e379c3e-11d3-4f6c-94e5-9fc57783d470 823 0 2024-06-25 16:27:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d96847fc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-0-d0607f9d2c calico-kube-controllers-d96847fc9-dl7wj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali405fa104dbb [] []}} ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:46.973 [INFO][3524] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.068 [INFO][3535] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" HandleID="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.085 [INFO][3535] ipam_plugin.go 264: Auto assigning IP ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" HandleID="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-0-d0607f9d2c", "pod":"calico-kube-controllers-d96847fc9-dl7wj", "timestamp":"2024-06-25 16:27:47.068970245 +0000 UTC"}, Hostname:"ci-3815.2.4-0-d0607f9d2c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.085 [INFO][3535] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.085 [INFO][3535] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.085 [INFO][3535] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-0-d0607f9d2c' Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.102 [INFO][3535] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.117 [INFO][3535] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.127 [INFO][3535] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.131 [INFO][3535] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.136 [INFO][3535] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.137 [INFO][3535] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.139 [INFO][3535] ipam.go 1685: Creating new handle: k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.153 [INFO][3535] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.164 [INFO][3535] ipam.go 1216: Successfully claimed IPs: [192.168.34.65/26] block=192.168.34.64/26 handle="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.164 [INFO][3535] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.65/26] handle="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.164 [INFO][3535] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:47.214597 containerd[1277]: 2024-06-25 16:27:47.164 [INFO][3535] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.65/26] IPv6=[] ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" HandleID="k8s-pod-network.ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.217002 containerd[1277]: 2024-06-25 16:27:47.168 [INFO][3524] k8s.go 386: Populated endpoint ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0", GenerateName:"calico-kube-controllers-d96847fc9-", Namespace:"calico-system", SelfLink:"", UID:"9e379c3e-11d3-4f6c-94e5-9fc57783d470", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d96847fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"", Pod:"calico-kube-controllers-d96847fc9-dl7wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali405fa104dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:47.217002 containerd[1277]: 2024-06-25 16:27:47.168 [INFO][3524] k8s.go 387: Calico CNI using IPs: [192.168.34.65/32] ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.217002 containerd[1277]: 2024-06-25 16:27:47.168 [INFO][3524] dataplane_linux.go 68: Setting the host side veth name to cali405fa104dbb ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.217002 containerd[1277]: 2024-06-25 16:27:47.188 [INFO][3524] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.217002 containerd[1277]: 2024-06-25 16:27:47.188 [INFO][3524] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0", GenerateName:"calico-kube-controllers-d96847fc9-", Namespace:"calico-system", SelfLink:"", UID:"9e379c3e-11d3-4f6c-94e5-9fc57783d470", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d96847fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c", Pod:"calico-kube-controllers-d96847fc9-dl7wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali405fa104dbb", MAC:"56:d9:07:d3:51:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:47.217002 containerd[1277]: 2024-06-25 16:27:47.205 [INFO][3524] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c" Namespace="calico-system" Pod="calico-kube-controllers-d96847fc9-dl7wj" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:47.241000 audit[3556]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3556 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:47.241000 audit[3556]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffc17a3e910 a2=0 a3=7ffc17a3e8fc items=0 ppid=3306 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:47.241000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:47.306328 containerd[1277]: time="2024-06-25T16:27:47.302593071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:47.306328 containerd[1277]: time="2024-06-25T16:27:47.302712495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:47.307005 containerd[1277]: time="2024-06-25T16:27:47.302755404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:47.307005 containerd[1277]: time="2024-06-25T16:27:47.306857914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:47.347488 systemd[1]: Started cri-containerd-ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c.scope - libcontainer container ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c. Jun 25 16:27:47.384000 audit: BPF prog-id=136 op=LOAD Jun 25 16:27:47.385000 audit: BPF prog-id=137 op=LOAD Jun 25 16:27:47.385000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3565 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:47.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386438303332363365656430393363306639333333656466366539 Jun 25 16:27:47.385000 audit: BPF prog-id=138 op=LOAD Jun 25 16:27:47.385000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3565 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:47.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386438303332363365656430393363306639333333656466366539 Jun 25 16:27:47.385000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:27:47.385000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:27:47.385000 audit: BPF prog-id=139 op=LOAD Jun 25 16:27:47.385000 audit[3575]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3565 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:47.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162386438303332363365656430393363306639333333656466366539 Jun 25 16:27:47.454501 containerd[1277]: time="2024-06-25T16:27:47.454444599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d96847fc9-dl7wj,Uid:9e379c3e-11d3-4f6c-94e5-9fc57783d470,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c\"" Jun 25 16:27:47.467816 containerd[1277]: time="2024-06-25T16:27:47.467763554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:27:48.412968 containerd[1277]: time="2024-06-25T16:27:48.412888247Z" level=info msg="StopPodSandbox for 
\"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\"" Jun 25 16:27:48.413640 containerd[1277]: time="2024-06-25T16:27:48.413585996Z" level=info msg="StopPodSandbox for \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\"" Jun 25 16:27:48.413802 containerd[1277]: time="2024-06-25T16:27:48.413764884Z" level=info msg="StopPodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\"" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.568 [INFO][3634] k8s.go 608: Cleaning up netns ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.569 [INFO][3634] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" iface="eth0" netns="/var/run/netns/cni-e3cf5786-d18b-79b3-e368-781ca33e0b7b" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.569 [INFO][3634] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" iface="eth0" netns="/var/run/netns/cni-e3cf5786-d18b-79b3-e368-781ca33e0b7b" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.569 [INFO][3634] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" iface="eth0" netns="/var/run/netns/cni-e3cf5786-d18b-79b3-e368-781ca33e0b7b" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.569 [INFO][3634] k8s.go 615: Releasing IP address(es) ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.569 [INFO][3634] utils.go 188: Calico CNI releasing IP address ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.681 [INFO][3660] ipam_plugin.go 411: Releasing address using handleID ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.681 [INFO][3660] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.681 [INFO][3660] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.694 [WARNING][3660] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.694 [INFO][3660] ipam_plugin.go 439: Releasing address using workloadID ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.697 [INFO][3660] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:48.705356 containerd[1277]: 2024-06-25 16:27:48.701 [INFO][3634] k8s.go 621: Teardown processing complete. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:48.710690 systemd[1]: run-netns-cni\x2de3cf5786\x2dd18b\x2d79b3\x2de368\x2d781ca33e0b7b.mount: Deactivated successfully. Jun 25 16:27:48.714401 containerd[1277]: time="2024-06-25T16:27:48.713374938Z" level=info msg="TearDown network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" successfully" Jun 25 16:27:48.714401 containerd[1277]: time="2024-06-25T16:27:48.713500591Z" level=info msg="StopPodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" returns successfully" Jun 25 16:27:48.714759 kubelet[2274]: E0625 16:27:48.714704 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:48.717640 containerd[1277]: time="2024-06-25T16:27:48.717551207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vxgt,Uid:311c726d-a38f-4e12-bc55-98e4f6f1ab2a,Namespace:kube-system,Attempt:1,}" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.601 [INFO][3652] k8s.go 608: Cleaning up netns ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.602 [INFO][3652] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" iface="eth0" netns="/var/run/netns/cni-b463a0db-b0ce-8dc4-4c82-2e1e307cf0df" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.602 [INFO][3652] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" iface="eth0" netns="/var/run/netns/cni-b463a0db-b0ce-8dc4-4c82-2e1e307cf0df" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.602 [INFO][3652] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" iface="eth0" netns="/var/run/netns/cni-b463a0db-b0ce-8dc4-4c82-2e1e307cf0df" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.602 [INFO][3652] k8s.go 615: Releasing IP address(es) ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.602 [INFO][3652] utils.go 188: Calico CNI releasing IP address ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.735 [INFO][3669] ipam_plugin.go 411: Releasing address using handleID ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.735 [INFO][3669] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.735 [INFO][3669] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.746 [WARNING][3669] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.747 [INFO][3669] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.750 [INFO][3669] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:48.765110 containerd[1277]: 2024-06-25 16:27:48.754 [INFO][3652] k8s.go 621: Teardown processing complete. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:48.769674 containerd[1277]: time="2024-06-25T16:27:48.766029026Z" level=info msg="TearDown network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" successfully" Jun 25 16:27:48.769674 containerd[1277]: time="2024-06-25T16:27:48.766320047Z" level=info msg="StopPodSandbox for \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" returns successfully" Jun 25 16:27:48.770809 containerd[1277]: time="2024-06-25T16:27:48.770744427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrghb,Uid:66a2358e-62b7-4455-bce2-ea313197d5cb,Namespace:calico-system,Attempt:1,}" Jun 25 16:27:48.777074 systemd[1]: run-netns-cni\x2db463a0db\x2db0ce\x2d8dc4\x2d4c82\x2d2e1e307cf0df.mount: Deactivated successfully. Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.579 [INFO][3639] k8s.go 608: Cleaning up netns ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.580 [INFO][3639] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" iface="eth0" netns="/var/run/netns/cni-28cba6ae-c32f-de15-0ec2-57a146d509a9" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.580 [INFO][3639] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" iface="eth0" netns="/var/run/netns/cni-28cba6ae-c32f-de15-0ec2-57a146d509a9" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.581 [INFO][3639] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" iface="eth0" netns="/var/run/netns/cni-28cba6ae-c32f-de15-0ec2-57a146d509a9" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.581 [INFO][3639] k8s.go 615: Releasing IP address(es) ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.581 [INFO][3639] utils.go 188: Calico CNI releasing IP address ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.807 [INFO][3665] ipam_plugin.go 411: Releasing address using handleID ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.808 [INFO][3665] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.808 [INFO][3665] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.826 [WARNING][3665] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.826 [INFO][3665] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.833 [INFO][3665] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:48.842609 containerd[1277]: 2024-06-25 16:27:48.837 [INFO][3639] k8s.go 621: Teardown processing complete. 
ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:48.851554 containerd[1277]: time="2024-06-25T16:27:48.851471277Z" level=info msg="TearDown network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" successfully" Jun 25 16:27:48.851809 containerd[1277]: time="2024-06-25T16:27:48.851769511Z" level=info msg="StopPodSandbox for \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" returns successfully" Jun 25 16:27:48.853834 kubelet[2274]: E0625 16:27:48.853580 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:48.857416 containerd[1277]: time="2024-06-25T16:27:48.857353788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ctxnk,Uid:fd5671b3-3dd7-4434-840a-28ff60f28b4d,Namespace:kube-system,Attempt:1,}" Jun 25 16:27:48.940325 systemd-networkd[1091]: cali405fa104dbb: Gained IPv6LL Jun 25 16:27:49.304065 systemd-networkd[1091]: calie4a717f2c16: Link UP Jun 25 16:27:49.308576 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:27:49.308736 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie4a717f2c16: link becomes ready Jun 25 16:27:49.308997 systemd-networkd[1091]: calie4a717f2c16: Gained carrier Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:48.998 [INFO][3678] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0 coredns-7db6d8ff4d- kube-system 311c726d-a38f-4e12-bc55-98e4f6f1ab2a 844 0 2024-06-25 16:27:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-0-d0607f9d2c coredns-7db6d8ff4d-8vxgt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4a717f2c16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:48.998 [INFO][3678] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.154 [INFO][3716] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" HandleID="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.182 [INFO][3716] ipam_plugin.go 264: Auto assigning IP ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" HandleID="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000488ac0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-3815.2.4-0-d0607f9d2c", "pod":"coredns-7db6d8ff4d-8vxgt", "timestamp":"2024-06-25 16:27:49.154302978 +0000 UTC"}, Hostname:"ci-3815.2.4-0-d0607f9d2c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.182 [INFO][3716] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.183 [INFO][3716] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.183 [INFO][3716] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-0-d0607f9d2c' Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.188 [INFO][3716] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.203 [INFO][3716] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.214 [INFO][3716] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.218 [INFO][3716] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.224 [INFO][3716] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.224 [INFO][3716] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.230 [INFO][3716] ipam.go 1685: Creating new handle: k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0 Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.241 [INFO][3716] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.260 [INFO][3716] ipam.go 1216: Successfully claimed IPs: [192.168.34.66/26] block=192.168.34.64/26 handle="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.260 [INFO][3716] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.66/26] handle="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.260 [INFO][3716] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:49.346543 containerd[1277]: 2024-06-25 16:27:49.260 [INFO][3716] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.66/26] IPv6=[] ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" HandleID="k8s-pod-network.f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.348146 containerd[1277]: 2024-06-25 16:27:49.267 [INFO][3678] k8s.go 386: Populated endpoint ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"311c726d-a38f-4e12-bc55-98e4f6f1ab2a", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"", Pod:"coredns-7db6d8ff4d-8vxgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4a717f2c16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:49.348146 containerd[1277]: 2024-06-25 16:27:49.267 [INFO][3678] k8s.go 387: Calico CNI using IPs: [192.168.34.66/32] ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.348146 containerd[1277]: 2024-06-25 16:27:49.267 [INFO][3678] dataplane_linux.go 68: Setting the host side veth name to calie4a717f2c16 ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.348146 containerd[1277]: 2024-06-25 16:27:49.311 [INFO][3678] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" 
WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.348146 containerd[1277]: 2024-06-25 16:27:49.315 [INFO][3678] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"311c726d-a38f-4e12-bc55-98e4f6f1ab2a", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0", Pod:"coredns-7db6d8ff4d-8vxgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4a717f2c16", MAC:"ce:c4:60:a3:2f:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:49.348146 containerd[1277]: 2024-06-25 16:27:49.339 [INFO][3678] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vxgt" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:49.393369 systemd-networkd[1091]: calid223a7ea0a7: Link UP Jun 25 16:27:49.395168 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid223a7ea0a7: link becomes ready Jun 25 16:27:49.394822 systemd-networkd[1091]: calid223a7ea0a7: Gained carrier Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.046 [INFO][3692] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0 csi-node-driver- calico-system 66a2358e-62b7-4455-bce2-ea313197d5cb 846 0 2024-06-25 16:27:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-0-d0607f9d2c 
csi-node-driver-qrghb eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid223a7ea0a7 [] []}} ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.046 [INFO][3692] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.174 [INFO][3724] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" HandleID="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.199 [INFO][3724] ipam_plugin.go 264: Auto assigning IP ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" HandleID="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a65a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-0-d0607f9d2c", "pod":"csi-node-driver-qrghb", "timestamp":"2024-06-25 16:27:49.174728834 +0000 UTC"}, Hostname:"ci-3815.2.4-0-d0607f9d2c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.208 [INFO][3724] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.260 [INFO][3724] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.261 [INFO][3724] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-0-d0607f9d2c' Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.269 [INFO][3724] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.309 [INFO][3724] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.328 [INFO][3724] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.332 [INFO][3724] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.344 [INFO][3724] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.344 [INFO][3724] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.349 [INFO][3724] ipam.go 1685: Creating new handle: k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700 Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.360 [INFO][3724] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.373 [INFO][3724] ipam.go 1216: Successfully claimed IPs: [192.168.34.67/26] block=192.168.34.64/26 handle="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.373 [INFO][3724] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.67/26] handle="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.373 [INFO][3724] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:49.436010 containerd[1277]: 2024-06-25 16:27:49.374 [INFO][3724] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.67/26] IPv6=[] ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" HandleID="k8s-pod-network.f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.437429 containerd[1277]: 2024-06-25 16:27:49.377 [INFO][3692] k8s.go 386: Populated endpoint ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66a2358e-62b7-4455-bce2-ea313197d5cb", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"", Pod:"csi-node-driver-qrghb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid223a7ea0a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:49.437429 containerd[1277]: 2024-06-25 16:27:49.377 [INFO][3692] k8s.go 387: Calico CNI using IPs: [192.168.34.67/32] ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.437429 containerd[1277]: 2024-06-25 16:27:49.377 [INFO][3692] dataplane_linux.go 68: Setting the host side veth name to calid223a7ea0a7 ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.437429 containerd[1277]: 2024-06-25 16:27:49.395 [INFO][3692] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.437429 containerd[1277]: 2024-06-25 16:27:49.395 [INFO][3692] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" 
WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66a2358e-62b7-4455-bce2-ea313197d5cb", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700", Pod:"csi-node-driver-qrghb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid223a7ea0a7", MAC:"ee:01:31:b1:82:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:49.437429 containerd[1277]: 2024-06-25 16:27:49.423 [INFO][3692] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700" Namespace="calico-system" Pod="csi-node-driver-qrghb" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:49.483000 audit[3765]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=3765 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:49.483000 audit[3765]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffee488e130 a2=0 a3=7ffee488e11c items=0 ppid=3306 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.483000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:49.496197 systemd-networkd[1091]: calib64ac3ce55c: Link UP Jun 25 16:27:49.501271 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib64ac3ce55c: link becomes ready Jun 25 16:27:49.500217 systemd-networkd[1091]: calib64ac3ce55c: Gained carrier Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.103 [INFO][3705] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0 coredns-7db6d8ff4d- kube-system fd5671b3-3dd7-4434-840a-28ff60f28b4d 845 0 2024-06-25 16:27:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-0-d0607f9d2c coredns-7db6d8ff4d-ctxnk eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] calib64ac3ce55c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.103 [INFO][3705] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.271 [INFO][3730] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" HandleID="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.331 [INFO][3730] ipam_plugin.go 264: Auto assigning IP ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" HandleID="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-0-d0607f9d2c", "pod":"coredns-7db6d8ff4d-ctxnk", "timestamp":"2024-06-25 16:27:49.271125115 +0000 UTC"}, Hostname:"ci-3815.2.4-0-d0607f9d2c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.331 [INFO][3730] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.374 [INFO][3730] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.374 [INFO][3730] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-0-d0607f9d2c' Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.396 [INFO][3730] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.418 [INFO][3730] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.435 [INFO][3730] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.442 [INFO][3730] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.447 [INFO][3730] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.448 [INFO][3730] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.451 [INFO][3730] ipam.go 1685: Creating new handle: k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1 Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.460 [INFO][3730] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.472 [INFO][3730] ipam.go 1216: Successfully claimed IPs: [192.168.34.68/26] block=192.168.34.64/26 handle="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.473 [INFO][3730] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.68/26] handle="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.473 [INFO][3730] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:27:49.544316 containerd[1277]: 2024-06-25 16:27:49.473 [INFO][3730] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.68/26] IPv6=[] ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" HandleID="k8s-pod-network.685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.545730 containerd[1277]: 2024-06-25 16:27:49.475 [INFO][3705] k8s.go 386: Populated endpoint ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fd5671b3-3dd7-4434-840a-28ff60f28b4d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"", Pod:"coredns-7db6d8ff4d-ctxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib64ac3ce55c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:49.545730 containerd[1277]: 2024-06-25 16:27:49.476 [INFO][3705] k8s.go 387: Calico CNI using IPs: [192.168.34.68/32] ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.545730 containerd[1277]: 2024-06-25 16:27:49.476 [INFO][3705] dataplane_linux.go 68: Setting the host side veth name to calib64ac3ce55c ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.545730 containerd[1277]: 2024-06-25 16:27:49.502 [INFO][3705] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" 
WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.545730 containerd[1277]: 2024-06-25 16:27:49.505 [INFO][3705] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fd5671b3-3dd7-4434-840a-28ff60f28b4d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1", Pod:"coredns-7db6d8ff4d-ctxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib64ac3ce55c", MAC:"e2:af:26:60:2e:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:49.545730 containerd[1277]: 2024-06-25 16:27:49.536 [INFO][3705] k8s.go 500: Wrote updated endpoint to datastore ContainerID="685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ctxnk" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:49.563835 containerd[1277]: time="2024-06-25T16:27:49.560200482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:49.563835 containerd[1277]: time="2024-06-25T16:27:49.560377684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:49.563835 containerd[1277]: time="2024-06-25T16:27:49.560437701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:49.563835 containerd[1277]: time="2024-06-25T16:27:49.560456734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:49.571000 audit[3795]: NETFILTER_CFG table=filter:103 family=2 entries=38 op=nft_register_chain pid=3795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:49.571000 audit[3795]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7fffc86b46a0 a2=0 a3=7fffc86b468c items=0 ppid=3306 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.571000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:49.603363 systemd[1]: Started cri-containerd-f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0.scope - libcontainer container f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0. Jun 25 16:27:49.614617 containerd[1277]: time="2024-06-25T16:27:49.614454233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:49.614617 containerd[1277]: time="2024-06-25T16:27:49.614566580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:49.614969 containerd[1277]: time="2024-06-25T16:27:49.614599684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:49.614969 containerd[1277]: time="2024-06-25T16:27:49.614623155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:49.652000 audit: BPF prog-id=140 op=LOAD Jun 25 16:27:49.654000 audit: BPF prog-id=141 op=LOAD Jun 25 16:27:49.654000 audit[3800]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3774 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663132366631316336356535323630353736613635326134333732 Jun 25 16:27:49.655000 audit: BPF prog-id=142 op=LOAD Jun 25 16:27:49.655000 audit[3800]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3774 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663132366631316336356535323630353736613635326134333732 Jun 25 16:27:49.655000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:27:49.655000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:27:49.655000 audit: BPF prog-id=143 op=LOAD Jun 25 16:27:49.655000 audit[3800]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3774 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663132366631316336356535323630353736613635326134333732 Jun 25 16:27:49.679000 audit[3834]: NETFILTER_CFG table=filter:104 family=2 entries=38 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:27:49.679000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffffca96c50 a2=0 a3=7ffffca96c3c items=0 ppid=3306 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.679000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:27:49.702425 systemd[1]: Started cri-containerd-f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700.scope - libcontainer container f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700. Jun 25 16:27:49.719402 systemd[1]: run-netns-cni\x2d28cba6ae\x2dc32f\x2dde15\x2d0ec2\x2d57a146d509a9.mount: Deactivated successfully. 
Jun 25 16:27:49.749000 audit: BPF prog-id=144 op=LOAD Jun 25 16:27:49.751000 audit: BPF prog-id=145 op=LOAD Jun 25 16:27:49.751000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3801 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631616339663163336638343536666162396432663161336130666634 Jun 25 16:27:49.751000 audit: BPF prog-id=146 op=LOAD Jun 25 16:27:49.751000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3801 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631616339663163336638343536666162396432663161336130666634 Jun 25 16:27:49.751000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:27:49.751000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:27:49.751000 audit: BPF prog-id=147 op=LOAD Jun 25 16:27:49.751000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3801 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631616339663163336638343536666162396432663161336130666634 Jun 25 16:27:49.821866 containerd[1277]: time="2024-06-25T16:27:49.821712239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vxgt,Uid:311c726d-a38f-4e12-bc55-98e4f6f1ab2a,Namespace:kube-system,Attempt:1,} returns sandbox id \"f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0\"" Jun 25 16:27:49.825735 kubelet[2274]: E0625 16:27:49.825691 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:49.852065 containerd[1277]: time="2024-06-25T16:27:49.851966069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrghb,Uid:66a2358e-62b7-4455-bce2-ea313197d5cb,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700\"" Jun 25 16:27:49.898174 containerd[1277]: time="2024-06-25T16:27:49.885300950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:27:49.898174 containerd[1277]: time="2024-06-25T16:27:49.885400995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:49.898174 containerd[1277]: time="2024-06-25T16:27:49.885426868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:27:49.898174 containerd[1277]: time="2024-06-25T16:27:49.885445716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:27:49.913134 containerd[1277]: time="2024-06-25T16:27:49.912773907Z" level=info msg="CreateContainer within sandbox \"f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:27:49.934951 systemd[1]: Started cri-containerd-685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1.scope - libcontainer container 685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1. Jun 25 16:27:49.948371 systemd[1]: run-containerd-runc-k8s.io-685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1-runc.rXuIO9.mount: Deactivated successfully. Jun 25 16:27:49.989000 audit: BPF prog-id=148 op=LOAD Jun 25 16:27:49.991000 audit: BPF prog-id=149 op=LOAD Jun 25 16:27:49.991000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638353933336666386465613666303961656639303162346335666539 Jun 25 16:27:49.992000 audit: BPF prog-id=150 op=LOAD Jun 25 16:27:49.992000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638353933336666386465613666303961656639303162346335666539 Jun 25 16:27:49.993000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:27:49.993000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:27:49.994000 audit: BPF prog-id=151 op=LOAD Jun 25 16:27:49.994000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:49.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638353933336666386465613666303961656639303162346335666539 Jun 25 16:27:50.048593 containerd[1277]: time="2024-06-25T16:27:50.048522630Z" level=info msg="CreateContainer within sandbox \"f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50825dde97a04085f5742f6ecfb3f88d8572cdfbf80c97d342320d09d5b7b604\"" Jun 25 16:27:50.054257 containerd[1277]: time="2024-06-25T16:27:50.053933977Z" level=info msg="StartContainer for \"50825dde97a04085f5742f6ecfb3f88d8572cdfbf80c97d342320d09d5b7b604\"" Jun 25 16:27:50.109686 containerd[1277]: time="2024-06-25T16:27:50.078997929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ctxnk,Uid:fd5671b3-3dd7-4434-840a-28ff60f28b4d,Namespace:kube-system,Attempt:1,} returns sandbox id \"685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1\"" Jun 25 16:27:50.126614 kubelet[2274]: E0625 16:27:50.123197 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:50.143871 containerd[1277]: time="2024-06-25T16:27:50.142237130Z" level=info msg="CreateContainer within sandbox \"685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:27:50.178640 containerd[1277]: time="2024-06-25T16:27:50.178575507Z" level=info msg="CreateContainer within sandbox \"685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83b69c54b65279e44133599b8fc40149e33d297281f2dc3e52d2e247a66c08f9\"" Jun 25 16:27:50.182109 containerd[1277]: time="2024-06-25T16:27:50.179613614Z" level=info msg="StartContainer for \"83b69c54b65279e44133599b8fc40149e33d297281f2dc3e52d2e247a66c08f9\"" Jun 25 16:27:50.200730 systemd[1]: Started cri-containerd-50825dde97a04085f5742f6ecfb3f88d8572cdfbf80c97d342320d09d5b7b604.scope - libcontainer container 50825dde97a04085f5742f6ecfb3f88d8572cdfbf80c97d342320d09d5b7b604. 
Jun 25 16:27:50.238000 audit: BPF prog-id=152 op=LOAD Jun 25 16:27:50.239000 audit: BPF prog-id=153 op=LOAD Jun 25 16:27:50.239000 audit[3918]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b9988 a2=78 a3=0 items=0 ppid=3774 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530383235646465393761303430383566353734326636656366623366 Jun 25 16:27:50.239000 audit: BPF prog-id=154 op=LOAD Jun 25 16:27:50.239000 audit[3918]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b9720 a2=78 a3=0 items=0 ppid=3774 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530383235646465393761303430383566353734326636656366623366 Jun 25 16:27:50.239000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:27:50.239000 audit: BPF prog-id=153 op=UNLOAD Jun 25 16:27:50.240000 audit: BPF prog-id=155 op=LOAD Jun 25 16:27:50.240000 audit[3918]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b9be0 a2=78 a3=0 items=0 ppid=3774 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530383235646465393761303430383566353734326636656366623366 Jun 25 16:27:50.317370 systemd[1]: Started cri-containerd-83b69c54b65279e44133599b8fc40149e33d297281f2dc3e52d2e247a66c08f9.scope - libcontainer container 83b69c54b65279e44133599b8fc40149e33d297281f2dc3e52d2e247a66c08f9. 
Jun 25 16:27:50.333101 containerd[1277]: time="2024-06-25T16:27:50.333010141Z" level=info msg="StartContainer for \"50825dde97a04085f5742f6ecfb3f88d8572cdfbf80c97d342320d09d5b7b604\" returns successfully" Jun 25 16:27:50.343000 audit: BPF prog-id=156 op=LOAD Jun 25 16:27:50.346000 audit: BPF prog-id=157 op=LOAD Jun 25 16:27:50.346000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3874 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833623639633534623635323739653434313333353939623866633430 Jun 25 16:27:50.350000 audit: BPF prog-id=158 op=LOAD Jun 25 16:27:50.350000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3874 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833623639633534623635323739653434313333353939623866633430 Jun 25 16:27:50.351000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:27:50.351000 audit: BPF prog-id=157 op=UNLOAD Jun 25 16:27:50.351000 audit: BPF prog-id=159 op=LOAD Jun 25 16:27:50.351000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3874 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833623639633534623635323739653434313333353939623866633430 Jun 25 16:27:50.432699 containerd[1277]: time="2024-06-25T16:27:50.432621000Z" level=info msg="StartContainer for \"83b69c54b65279e44133599b8fc40149e33d297281f2dc3e52d2e247a66c08f9\" returns successfully" Jun 25 16:27:50.474426 systemd-networkd[1091]: calie4a717f2c16: Gained IPv6LL Jun 25 16:27:50.810624 kubelet[2274]: E0625 16:27:50.810475 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:50.836467 kubelet[2274]: E0625 16:27:50.836424 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:50.927955 kubelet[2274]: I0625 16:27:50.924067 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ctxnk" podStartSLOduration=37.922666398 podStartE2EDuration="37.922666398s" podCreationTimestamp="2024-06-25 16:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:50.873933006 +0000 UTC m=+53.750536244" watchObservedRunningTime="2024-06-25 16:27:50.922666398 +0000 UTC m=+53.799269621" Jun 25 16:27:50.963000 audit[3990]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:50.963000 audit[3990]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd76840300 a2=0 a3=7ffd768402ec items=0 ppid=2449 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:50.982000 audit[3990]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=3990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:50.982000 audit[3990]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd76840300 a2=0 a3=0 items=0 ppid=2449 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:50.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:51.001000 audit[3992]: NETFILTER_CFG table=filter:107 family=2 entries=11 op=nft_register_rule pid=3992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:51.001000 audit[3992]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffecf501c90 a2=0 a3=7ffecf501c7c items=0 ppid=2449 pid=3992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:51.005000 audit[3992]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=3992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:51.005000 audit[3992]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffecf501c90 a2=0 a3=7ffecf501c7c items=0 ppid=2449 pid=3992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:51.051502 systemd-networkd[1091]: calid223a7ea0a7: Gained IPv6LL Jun 25 16:27:51.178419 systemd-networkd[1091]: calib64ac3ce55c: Gained IPv6LL Jun 25 16:27:51.508766 containerd[1277]: time="2024-06-25T16:27:51.508249474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:51.511182 containerd[1277]: time="2024-06-25T16:27:51.510996574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 
25 16:27:51.512541 containerd[1277]: time="2024-06-25T16:27:51.512468318Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:51.518113 containerd[1277]: time="2024-06-25T16:27:51.518018711Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:51.523110 containerd[1277]: time="2024-06-25T16:27:51.523006401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:51.525263 containerd[1277]: time="2024-06-25T16:27:51.524916889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.05699039s" Jun 25 16:27:51.525502 containerd[1277]: time="2024-06-25T16:27:51.525270629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:27:51.537543 containerd[1277]: time="2024-06-25T16:27:51.536799519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:27:51.583226 containerd[1277]: time="2024-06-25T16:27:51.583011163Z" level=info msg="CreateContainer within sandbox \"ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:27:51.621854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610344607.mount: Deactivated successfully. Jun 25 16:27:51.631233 containerd[1277]: time="2024-06-25T16:27:51.631141808Z" level=info msg="CreateContainer within sandbox \"ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a\"" Jun 25 16:27:51.638598 containerd[1277]: time="2024-06-25T16:27:51.638469827Z" level=info msg="StartContainer for \"10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a\"" Jun 25 16:27:51.746905 systemd[1]: Started cri-containerd-10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a.scope - libcontainer container 10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a. 
Jun 25 16:27:51.791000 audit: BPF prog-id=160 op=LOAD Jun 25 16:27:51.793682 kernel: kauditd_printk_skb: 97 callbacks suppressed Jun 25 16:27:51.793776 kernel: audit: type=1334 audit(1719332871.791:586): prog-id=160 op=LOAD Jun 25 16:27:51.797000 audit: BPF prog-id=161 op=LOAD Jun 25 16:27:51.797000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3565 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.805431 kernel: audit: type=1334 audit(1719332871.797:587): prog-id=161 op=LOAD Jun 25 16:27:51.805626 kernel: audit: type=1300 audit(1719332871.797:587): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3565 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613735636437623061346237333839653363353236376330343733 Jun 25 16:27:51.813373 kernel: audit: type=1327 audit(1719332871.797:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613735636437623061346237333839653363353236376330343733 Jun 25 16:27:51.797000 audit: BPF prog-id=162 op=LOAD Jun 25 16:27:51.816057 kernel: audit: type=1334 audit(1719332871.797:588): prog-id=162 op=LOAD Jun 25 16:27:51.797000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3565 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.821386 kernel: audit: type=1300 audit(1719332871.797:588): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3565 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613735636437623061346237333839653363353236376330343733 Jun 25 16:27:51.829225 kernel: audit: type=1327 audit(1719332871.797:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613735636437623061346237333839653363353236376330343733 Jun 25 16:27:51.797000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:27:51.797000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:27:51.832803 kernel: audit: type=1334 audit(1719332871.797:589): prog-id=162 op=UNLOAD Jun 25 16:27:51.832960 kernel: audit: type=1334 audit(1719332871.797:590): prog-id=161 op=UNLOAD Jun 25 16:27:51.833412 kernel: audit: type=1334 audit(1719332871.797:591): 
prog-id=163 op=LOAD Jun 25 16:27:51.797000 audit: BPF prog-id=163 op=LOAD Jun 25 16:27:51.797000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3565 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:51.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613735636437623061346237333839653363353236376330343733 Jun 25 16:27:51.866738 kubelet[2274]: E0625 16:27:51.866683 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:51.918228 kubelet[2274]: E0625 16:27:51.917626 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:51.962959 kubelet[2274]: I0625 16:27:51.962666 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8vxgt" podStartSLOduration=38.96262963 podStartE2EDuration="38.96262963s" podCreationTimestamp="2024-06-25 16:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:27:50.928570728 +0000 UTC m=+53.805173952" watchObservedRunningTime="2024-06-25 16:27:51.96262963 +0000 UTC m=+54.839232977" Jun 25 16:27:51.973363 containerd[1277]: time="2024-06-25T16:27:51.973273406Z" level=info msg="StartContainer for \"10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a\" returns successfully" Jun 25 16:27:52.033753 kubelet[2274]: I0625 16:27:52.033661 2274 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:27:52.051371 kubelet[2274]: E0625 16:27:52.048093 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:52.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-161.35.235.79:22-139.178.89.65:34004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.058935 systemd[1]: Started sshd@9-161.35.235.79:22-139.178.89.65:34004.service - OpenSSH per-connection server daemon (139.178.89.65:34004). 
Jun 25 16:27:52.102000 audit[4031]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:52.102000 audit[4031]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd60870da0 a2=0 a3=7ffd60870d8c items=0 ppid=2449 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:52.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:52.158000 audit[4031]: NETFILTER_CFG table=nat:110 family=2 entries=56 op=nft_register_chain pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:27:52.158000 audit[4031]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd60870da0 a2=0 a3=7ffd60870d8c items=0 ppid=2449 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:52.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:27:52.247000 audit[4033]: USER_ACCT pid=4033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.253205 sshd[4033]: Accepted publickey for core from 139.178.89.65 port 34004 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:52.253000 audit[4033]: CRED_ACQ pid=4033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.253000 audit[4033]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb40fb920 a2=3 a3=7f068b7e1480 items=0 ppid=1 pid=4033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:52.253000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:52.255874 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:52.270690 systemd-logind[1266]: New session 10 of user core. Jun 25 16:27:52.272353 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 16:27:52.312805 systemd[1]: run-containerd-runc-k8s.io-2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084-runc.kMcR70.mount: Deactivated successfully. 
Jun 25 16:27:52.319000 audit[4033]: USER_START pid=4033 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.322000 audit[4045]: CRED_ACQ pid=4045 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.476000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:52.476000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e35d00 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:52.476000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:52.482000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:52.482000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00143e5d0 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:27:52.482000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:27:52.654637 sshd[4033]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:52.655000 audit[4033]: USER_END pid=4033 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.655000 audit[4033]: CRED_DISP pid=4033 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.659810 systemd[1]: sshd@9-161.35.235.79:22-139.178.89.65:34004.service: Deactivated successfully. Jun 25 16:27:52.661238 systemd[1]: session-10.scope: Deactivated successfully. 
Jun 25 16:27:52.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-161.35.235.79:22-139.178.89.65:34004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.662640 systemd-logind[1266]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:27:52.665214 systemd-logind[1266]: Removed session 10. Jun 25 16:27:52.768909 systemd[1]: run-containerd-runc-k8s.io-2671c971b06e410e81d9541df40a0d6405d76603e7c2a26114b349cc77065084-runc.IdjPmB.mount: Deactivated successfully. Jun 25 16:27:52.871712 kubelet[2274]: E0625 16:27:52.871644 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:52.873868 kubelet[2274]: E0625 16:27:52.873811 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:52.874537 kubelet[2274]: E0625 16:27:52.874492 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:27:52.923925 systemd[1]: run-containerd-runc-k8s.io-10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a-runc.mqTHCu.mount: Deactivated successfully. Jun 25 16:27:52.934178 kubelet[2274]: I0625 16:27:52.934022 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d96847fc9-dl7wj" podStartSLOduration=27.861339074 podStartE2EDuration="31.933997732s" podCreationTimestamp="2024-06-25 16:27:21 +0000 UTC" firstStartedPulling="2024-06-25 16:27:47.463407038 +0000 UTC m=+50.340010264" lastFinishedPulling="2024-06-25 16:27:51.536065635 +0000 UTC m=+54.412668922" observedRunningTime="2024-06-25 16:27:52.928594953 +0000 UTC m=+55.805198184" watchObservedRunningTime="2024-06-25 16:27:52.933997732 +0000 UTC m=+55.810600963" Jun 25 16:27:53.182116 containerd[1277]: time="2024-06-25T16:27:53.181893523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.186002 containerd[1277]: time="2024-06-25T16:27:53.185915522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:27:53.188079 containerd[1277]: time="2024-06-25T16:27:53.187989704Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.192199 containerd[1277]: time="2024-06-25T16:27:53.192127511Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.198340 containerd[1277]: time="2024-06-25T16:27:53.198263572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:53.200177 containerd[1277]: time="2024-06-25T16:27:53.199908077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo 
tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.663030736s" Jun 25 16:27:53.200177 containerd[1277]: time="2024-06-25T16:27:53.200069795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:27:53.206467 containerd[1277]: time="2024-06-25T16:27:53.206383383Z" level=info msg="CreateContainer within sandbox \"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:27:53.257243 containerd[1277]: time="2024-06-25T16:27:53.257060167Z" level=info msg="CreateContainer within sandbox \"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ded97dc9bc5b72c552a1829e919c43a5b732bd81117a98959cd3183fb7da3ef8\"" Jun 25 16:27:53.259759 containerd[1277]: time="2024-06-25T16:27:53.259705343Z" level=info msg="StartContainer for \"ded97dc9bc5b72c552a1829e919c43a5b732bd81117a98959cd3183fb7da3ef8\"" Jun 25 16:27:53.356431 systemd[1]: Started cri-containerd-ded97dc9bc5b72c552a1829e919c43a5b732bd81117a98959cd3183fb7da3ef8.scope - libcontainer container ded97dc9bc5b72c552a1829e919c43a5b732bd81117a98959cd3183fb7da3ef8. Jun 25 16:27:53.407000 audit: BPF prog-id=164 op=LOAD Jun 25 16:27:53.407000 audit[4123]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3801 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643937646339626335623732633535326131383239653931396334 Jun 25 16:27:53.407000 audit: BPF prog-id=165 op=LOAD Jun 25 16:27:53.407000 audit[4123]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3801 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643937646339626335623732633535326131383239653931396334 Jun 25 16:27:53.407000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:27:53.407000 audit: BPF prog-id=164 op=UNLOAD Jun 25 16:27:53.407000 audit: BPF prog-id=166 op=LOAD Jun 25 16:27:53.407000 audit[4123]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3801 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.407000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643937646339626335623732633535326131383239653931396334 Jun 25 16:27:53.440686 containerd[1277]: time="2024-06-25T16:27:53.440525904Z" level=info msg="StartContainer for \"ded97dc9bc5b72c552a1829e919c43a5b732bd81117a98959cd3183fb7da3ef8\" returns successfully" Jun 25 16:27:53.443694 containerd[1277]: time="2024-06-25T16:27:53.443552991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:27:53.713000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:53.713000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c010d7f590 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:27:53.713000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:27:53.714000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:53.714000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c0101139e0 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:27:53.714000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:27:53.721000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:53.721000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c010113a20 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:27:53.721000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:27:53.722000 audit[2155]: AVC avc: denied { watch } for pid=2155 
comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=526920 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:53.722000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c010be7a70 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:27:53.722000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:27:53.724000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:53.724000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c010be7b60 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:27:53.724000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:27:53.731000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=526914 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:27:53.731000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c010be7c20 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:27:53.731000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:27:55.086222 containerd[1277]: time="2024-06-25T16:27:55.086158251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.089595 containerd[1277]: time="2024-06-25T16:27:55.089503471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:27:55.092568 containerd[1277]: time="2024-06-25T16:27:55.092495674Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.095370 containerd[1277]: 
time="2024-06-25T16:27:55.095312464Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.098728 containerd[1277]: time="2024-06-25T16:27:55.098644170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:55.100700 containerd[1277]: time="2024-06-25T16:27:55.100627805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.656922288s" Jun 25 16:27:55.101025 containerd[1277]: time="2024-06-25T16:27:55.100981117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:27:55.111927 containerd[1277]: time="2024-06-25T16:27:55.111856345Z" level=info msg="CreateContainer within sandbox \"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:27:55.148856 containerd[1277]: time="2024-06-25T16:27:55.148796817Z" level=info msg="CreateContainer within sandbox \"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fb92206a83fae5b1a65a43331e00575dcfd6b161cd26157034588c8e5e743b2c\"" Jun 25 16:27:55.150217 containerd[1277]: time="2024-06-25T16:27:55.150162207Z" level=info msg="StartContainer for \"fb92206a83fae5b1a65a43331e00575dcfd6b161cd26157034588c8e5e743b2c\"" Jun 25 16:27:55.221395 systemd[1]: Started cri-containerd-fb92206a83fae5b1a65a43331e00575dcfd6b161cd26157034588c8e5e743b2c.scope - libcontainer container fb92206a83fae5b1a65a43331e00575dcfd6b161cd26157034588c8e5e743b2c. Jun 25 16:27:55.227858 systemd[1]: run-containerd-runc-k8s.io-fb92206a83fae5b1a65a43331e00575dcfd6b161cd26157034588c8e5e743b2c-runc.SVtS2N.mount: Deactivated successfully. 
Jun 25 16:27:55.270000 audit: BPF prog-id=167 op=LOAD Jun 25 16:27:55.270000 audit[4174]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3801 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:55.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662393232303661383366616535623161363561343333333165303035 Jun 25 16:27:55.270000 audit: BPF prog-id=168 op=LOAD Jun 25 16:27:55.270000 audit[4174]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3801 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:55.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662393232303661383366616535623161363561343333333165303035 Jun 25 16:27:55.270000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:27:55.270000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:27:55.270000 audit: BPF prog-id=169 op=LOAD Jun 25 16:27:55.270000 audit[4174]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3801 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:55.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662393232303661383366616535623161363561343333333165303035 Jun 25 16:27:55.303665 containerd[1277]: time="2024-06-25T16:27:55.303588190Z" level=info msg="StartContainer for \"fb92206a83fae5b1a65a43331e00575dcfd6b161cd26157034588c8e5e743b2c\" returns successfully" Jun 25 16:27:55.635011 kubelet[2274]: I0625 16:27:55.634953 2274 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:27:55.638153 kubelet[2274]: I0625 16:27:55.638108 2274 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:27:55.930166 kubelet[2274]: I0625 16:27:55.930067 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qrghb" podStartSLOduration=29.682959551 podStartE2EDuration="34.930015074s" podCreationTimestamp="2024-06-25 16:27:21 +0000 UTC" firstStartedPulling="2024-06-25 16:27:49.855795862 +0000 UTC m=+52.732399075" lastFinishedPulling="2024-06-25 16:27:55.102851374 +0000 UTC m=+57.979454598" observedRunningTime="2024-06-25 16:27:55.925309839 +0000 UTC m=+58.801913082" watchObservedRunningTime="2024-06-25 16:27:55.930015074 +0000 UTC m=+58.806618310" Jun 25 16:27:57.398062 containerd[1277]: time="2024-06-25T16:27:57.397956873Z" level=info msg="StopPodSandbox for 
\"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\"" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.523 [WARNING][4218] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fd5671b3-3dd7-4434-840a-28ff60f28b4d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1", Pod:"coredns-7db6d8ff4d-ctxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib64ac3ce55c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.524 [INFO][4218] k8s.go 608: Cleaning up netns ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.524 [INFO][4218] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" iface="eth0" netns="" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.524 [INFO][4218] k8s.go 615: Releasing IP address(es) ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.524 [INFO][4218] utils.go 188: Calico CNI releasing IP address ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.569 [INFO][4226] ipam_plugin.go 411: Releasing address using handleID ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.570 [INFO][4226] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.570 [INFO][4226] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.582 [WARNING][4226] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.582 [INFO][4226] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.586 [INFO][4226] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:57.590954 containerd[1277]: 2024-06-25 16:27:57.588 [INFO][4218] k8s.go 621: Teardown processing complete. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.592886 containerd[1277]: time="2024-06-25T16:27:57.591301905Z" level=info msg="TearDown network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" successfully" Jun 25 16:27:57.592886 containerd[1277]: time="2024-06-25T16:27:57.591360923Z" level=info msg="StopPodSandbox for \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" returns successfully" Jun 25 16:27:57.593657 containerd[1277]: time="2024-06-25T16:27:57.593607245Z" level=info msg="RemovePodSandbox for \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\"" Jun 25 16:27:57.650906 containerd[1277]: time="2024-06-25T16:27:57.593854187Z" level=info msg="Forcibly stopping sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\"" Jun 25 16:27:57.679861 kernel: kauditd_printk_skb: 65 callbacks suppressed Jun 25 16:27:57.680078 kernel: audit: type=1130 audit(1719332877.672:621): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-161.35.235.79:22-139.178.89.65:35680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:57.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-161.35.235.79:22-139.178.89.65:35680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:57.672926 systemd[1]: Started sshd@10-161.35.235.79:22-139.178.89.65:35680.service - OpenSSH per-connection server daemon (139.178.89.65:35680). Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.795 [WARNING][4245] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fd5671b3-3dd7-4434-840a-28ff60f28b4d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"685933ff8dea6f09aef901b4c5fe989ff0d0e308f96176947febfbad9821faf1", Pod:"coredns-7db6d8ff4d-ctxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib64ac3ce55c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.796 [INFO][4245] k8s.go 608: Cleaning up netns ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.797 [INFO][4245] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" iface="eth0" netns="" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.797 [INFO][4245] k8s.go 615: Releasing IP address(es) ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.797 [INFO][4245] utils.go 188: Calico CNI releasing IP address ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.914 [INFO][4253] ipam_plugin.go 411: Releasing address using handleID ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.914 [INFO][4253] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.914 [INFO][4253] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.929 [WARNING][4253] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.929 [INFO][4253] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" HandleID="k8s-pod-network.bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--ctxnk-eth0" Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.934 [INFO][4253] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:57.939933 containerd[1277]: 2024-06-25 16:27:57.936 [INFO][4245] k8s.go 621: Teardown processing complete. ContainerID="bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b" Jun 25 16:27:57.939933 containerd[1277]: time="2024-06-25T16:27:57.939506835Z" level=info msg="TearDown network for sandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" successfully" Jun 25 16:27:57.979252 kernel: audit: type=1101 audit(1719332877.974:622): pid=4246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:57.974000 audit[4246]: USER_ACCT pid=4246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:57.980124 sshd[4246]: Accepted publickey for core from 139.178.89.65 port 35680 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:57.979000 audit[4246]: CRED_ACQ pid=4246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:57.984904 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:57.988246 kernel: audit: type=1103 audit(1719332877.979:623): pid=4246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:57.988350 kernel: audit: type=1006 audit(1719332877.979:624): pid=4246 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:27:58.003479 kernel: audit: type=1300 audit(1719332877.979:624): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2276d610 a2=3 a3=7f8a28fed480 items=0 ppid=1 pid=4246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:58.003587 kernel: audit: type=1327 audit(1719332877.979:624): proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:57.979000 audit[4246]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2276d610 a2=3 a3=7f8a28fed480 items=0 ppid=1 pid=4246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:57.979000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:58.005133 systemd-logind[1266]: New session 11 of user core. Jun 25 16:27:58.008378 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:27:58.021000 audit[4246]: USER_START pid=4246 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.029479 kernel: audit: type=1105 audit(1719332878.021:625): pid=4246 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.028000 audit[4260]: CRED_ACQ pid=4260 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.035654 kernel: audit: type=1103 audit(1719332878.028:626): pid=4260 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.105117 containerd[1277]: time="2024-06-25T16:27:58.105016226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:27:58.116062 containerd[1277]: time="2024-06-25T16:27:58.115979354Z" level=info msg="RemovePodSandbox \"bdd8408b971f22d4c6556218c7f710e8b4c18caea82416389c141c941be1395b\" returns successfully" Jun 25 16:27:58.117532 containerd[1277]: time="2024-06-25T16:27:58.117487766Z" level=info msg="StopPodSandbox for \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\"" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.212 [WARNING][4279] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66a2358e-62b7-4455-bce2-ea313197d5cb", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700", Pod:"csi-node-driver-qrghb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid223a7ea0a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.213 [INFO][4279] k8s.go 608: Cleaning up netns ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.213 [INFO][4279] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" iface="eth0" netns="" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.213 [INFO][4279] k8s.go 615: Releasing IP address(es) ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.213 [INFO][4279] utils.go 188: Calico CNI releasing IP address ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.304 [INFO][4289] ipam_plugin.go 411: Releasing address using handleID ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.307 [INFO][4289] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.307 [INFO][4289] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.319 [WARNING][4289] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.319 [INFO][4289] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.323 [INFO][4289] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:58.329799 containerd[1277]: 2024-06-25 16:27:58.327 [INFO][4279] k8s.go 621: Teardown processing complete. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.330810 containerd[1277]: time="2024-06-25T16:27:58.330747809Z" level=info msg="TearDown network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" successfully" Jun 25 16:27:58.330936 containerd[1277]: time="2024-06-25T16:27:58.330912347Z" level=info msg="StopPodSandbox for \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" returns successfully" Jun 25 16:27:58.335941 containerd[1277]: time="2024-06-25T16:27:58.335877355Z" level=info msg="RemovePodSandbox for \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\"" Jun 25 16:27:58.336464 containerd[1277]: time="2024-06-25T16:27:58.336372313Z" level=info msg="Forcibly stopping sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\"" Jun 25 16:27:58.535573 sshd[4246]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:58.545366 kernel: audit: type=1106 audit(1719332878.537:627): pid=4246 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.537000 audit[4246]: USER_END pid=4246 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.556106 kernel: audit: type=1104 audit(1719332878.537:628): pid=4246 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.537000 audit[4246]: CRED_DISP pid=4246 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.558558 systemd[1]: sshd@10-161.35.235.79:22-139.178.89.65:35680.service: Deactivated successfully. Jun 25 16:27:58.559980 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:27:58.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-161.35.235.79:22-139.178.89.65:35680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:27:58.567400 systemd-logind[1266]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:27:58.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-161.35.235.79:22-139.178.89.65:35684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:58.571991 systemd[1]: Started sshd@11-161.35.235.79:22-139.178.89.65:35684.service - OpenSSH per-connection server daemon (139.178.89.65:35684). Jun 25 16:27:58.585652 systemd-logind[1266]: Removed session 11. Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.447 [WARNING][4307] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66a2358e-62b7-4455-bce2-ea313197d5cb", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"f1ac9f1c3f8456fab9d2f1a3a0ff4e8fbb2f16071088b809bd9293d7bff5a700", Pod:"csi-node-driver-qrghb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid223a7ea0a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.447 [INFO][4307] k8s.go 608: Cleaning up netns ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.448 [INFO][4307] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" iface="eth0" netns="" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.448 [INFO][4307] k8s.go 615: Releasing IP address(es) ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.448 [INFO][4307] utils.go 188: Calico CNI releasing IP address ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.541 [INFO][4313] ipam_plugin.go 411: Releasing address using handleID ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.544 [INFO][4313] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.544 [INFO][4313] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.559 [WARNING][4313] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.559 [INFO][4313] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" HandleID="k8s-pod-network.fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-csi--node--driver--qrghb-eth0" Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.572 [INFO][4313] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:58.598422 containerd[1277]: 2024-06-25 16:27:58.596 [INFO][4307] k8s.go 621: Teardown processing complete. ContainerID="fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49" Jun 25 16:27:58.600304 containerd[1277]: time="2024-06-25T16:27:58.599606556Z" level=info msg="TearDown network for sandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" successfully" Jun 25 16:27:58.610743 containerd[1277]: time="2024-06-25T16:27:58.610562137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:27:58.610743 containerd[1277]: time="2024-06-25T16:27:58.610654105Z" level=info msg="RemovePodSandbox \"fec31157358f355a0ec0490cdef2eb578985359b42138f9e6b620be012d20c49\" returns successfully" Jun 25 16:27:58.612160 containerd[1277]: time="2024-06-25T16:27:58.612058837Z" level=info msg="StopPodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\"" Jun 25 16:27:58.660000 audit[4321]: USER_ACCT pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.661000 audit[4321]: CRED_ACQ pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.661000 audit[4321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0cd3dd30 a2=3 a3=7f5d66d2d480 items=0 ppid=1 pid=4321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:58.661000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:58.663394 sshd[4321]: Accepted publickey for core from 139.178.89.65 port 35684 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:58.663796 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:58.674417 systemd-logind[1266]: New session 12 of user core. Jun 25 16:27:58.680362 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:27:58.695000 audit[4321]: USER_START pid=4321 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.698000 audit[4341]: CRED_ACQ pid=4341 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.717 [WARNING][4336] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0", GenerateName:"calico-kube-controllers-d96847fc9-", Namespace:"calico-system", SelfLink:"", UID:"9e379c3e-11d3-4f6c-94e5-9fc57783d470", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d96847fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c", Pod:"calico-kube-controllers-d96847fc9-dl7wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali405fa104dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.718 [INFO][4336] k8s.go 608: Cleaning up netns ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.718 [INFO][4336] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" iface="eth0" netns="" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.718 [INFO][4336] k8s.go 615: Releasing IP address(es) ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.718 [INFO][4336] utils.go 188: Calico CNI releasing IP address ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.786 [INFO][4343] ipam_plugin.go 411: Releasing address using handleID ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.787 [INFO][4343] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.787 [INFO][4343] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.803 [WARNING][4343] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.803 [INFO][4343] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.811 [INFO][4343] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:58.815926 containerd[1277]: 2024-06-25 16:27:58.813 [INFO][4336] k8s.go 621: Teardown processing complete. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:58.817021 containerd[1277]: time="2024-06-25T16:27:58.816972831Z" level=info msg="TearDown network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" successfully" Jun 25 16:27:58.817148 containerd[1277]: time="2024-06-25T16:27:58.817126436Z" level=info msg="StopPodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" returns successfully" Jun 25 16:27:58.819947 containerd[1277]: time="2024-06-25T16:27:58.819775449Z" level=info msg="RemovePodSandbox for \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\"" Jun 25 16:27:58.819947 containerd[1277]: time="2024-06-25T16:27:58.819873337Z" level=info msg="Forcibly stopping sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\"" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:58.974 [WARNING][4365] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0", GenerateName:"calico-kube-controllers-d96847fc9-", Namespace:"calico-system", SelfLink:"", UID:"9e379c3e-11d3-4f6c-94e5-9fc57783d470", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d96847fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"ab8d803263eed093c0f9333edf6e98eb08c668a71d0d48290169aa77852a3c6c", Pod:"calico-kube-controllers-d96847fc9-dl7wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali405fa104dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:58.974 [INFO][4365] k8s.go 608: Cleaning up netns ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:58.974 [INFO][4365] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" iface="eth0" netns="" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:58.975 [INFO][4365] k8s.go 615: Releasing IP address(es) ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:58.975 [INFO][4365] utils.go 188: Calico CNI releasing IP address ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.035 [INFO][4372] ipam_plugin.go 411: Releasing address using handleID ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.035 [INFO][4372] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.035 [INFO][4372] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.051 [WARNING][4372] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.051 [INFO][4372] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" HandleID="k8s-pod-network.1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--kube--controllers--d96847fc9--dl7wj-eth0" Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.058 [INFO][4372] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:59.062867 containerd[1277]: 2024-06-25 16:27:59.060 [INFO][4365] k8s.go 621: Teardown processing complete. ContainerID="1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca" Jun 25 16:27:59.064508 containerd[1277]: time="2024-06-25T16:27:59.063957677Z" level=info msg="TearDown network for sandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" successfully" Jun 25 16:27:59.071739 containerd[1277]: time="2024-06-25T16:27:59.071642515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:27:59.072092 containerd[1277]: time="2024-06-25T16:27:59.072021496Z" level=info msg="RemovePodSandbox \"1520eb970f6f9692083308a598ad3e90a9eb839f9f0435326ac30faea6f3cbca\" returns successfully" Jun 25 16:27:59.073122 containerd[1277]: time="2024-06-25T16:27:59.072973161Z" level=info msg="StopPodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\"" Jun 25 16:27:59.126981 sshd[4321]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:59.144000 audit[4321]: USER_END pid=4321 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.144000 audit[4321]: CRED_DISP pid=4321 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-161.35.235.79:22-139.178.89.65:35692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:59.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-161.35.235.79:22-139.178.89.65:35684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:59.152542 systemd[1]: Started sshd@12-161.35.235.79:22-139.178.89.65:35692.service - OpenSSH per-connection server daemon (139.178.89.65:35692). Jun 25 16:27:59.154018 systemd[1]: sshd@11-161.35.235.79:22-139.178.89.65:35684.service: Deactivated successfully. Jun 25 16:27:59.156162 systemd[1]: session-12.scope: Deactivated successfully. 
Jun 25 16:27:59.162255 systemd-logind[1266]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:27:59.166141 systemd-logind[1266]: Removed session 12. Jun 25 16:27:59.252000 audit[4396]: USER_ACCT pid=4396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.254335 sshd[4396]: Accepted publickey for core from 139.178.89.65 port 35692 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:59.255000 audit[4396]: CRED_ACQ pid=4396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.255000 audit[4396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3f5d29c0 a2=3 a3=7fd79f3bd480 items=0 ppid=1 pid=4396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:59.255000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:59.256425 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:59.266993 systemd-logind[1266]: New session 13 of user core. Jun 25 16:27:59.270335 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:27:59.279000 audit[4396]: USER_START pid=4396 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.282000 audit[4401]: CRED_ACQ pid=4401 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.283 [WARNING][4391] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"311c726d-a38f-4e12-bc55-98e4f6f1ab2a", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0", Pod:"coredns-7db6d8ff4d-8vxgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4a717f2c16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.283 [INFO][4391] k8s.go 608: Cleaning up netns ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.284 [INFO][4391] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" iface="eth0" netns="" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.284 [INFO][4391] k8s.go 615: Releasing IP address(es) ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.284 [INFO][4391] utils.go 188: Calico CNI releasing IP address ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.351 [INFO][4402] ipam_plugin.go 411: Releasing address using handleID ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.351 [INFO][4402] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.351 [INFO][4402] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.375 [WARNING][4402] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.375 [INFO][4402] ipam_plugin.go 439: Releasing address using workloadID ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.394 [INFO][4402] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:59.400219 containerd[1277]: 2024-06-25 16:27:59.397 [INFO][4391] k8s.go 621: Teardown processing complete. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.401354 containerd[1277]: time="2024-06-25T16:27:59.401146022Z" level=info msg="TearDown network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" successfully" Jun 25 16:27:59.401533 containerd[1277]: time="2024-06-25T16:27:59.401503614Z" level=info msg="StopPodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" returns successfully" Jun 25 16:27:59.404782 containerd[1277]: time="2024-06-25T16:27:59.404729028Z" level=info msg="RemovePodSandbox for \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\"" Jun 25 16:27:59.405188 containerd[1277]: time="2024-06-25T16:27:59.405016029Z" level=info msg="Forcibly stopping sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\"" Jun 25 16:27:59.592834 sshd[4396]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:59.594000 audit[4396]: USER_END pid=4396 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.594000 audit[4396]: CRED_DISP pid=4396 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:59.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-161.35.235.79:22-139.178.89.65:35692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:59.598534 systemd[1]: sshd@12-161.35.235.79:22-139.178.89.65:35692.service: Deactivated successfully. Jun 25 16:27:59.599504 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:27:59.605014 systemd-logind[1266]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:27:59.607646 systemd-logind[1266]: Removed session 13. Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.497 [WARNING][4427] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"311c726d-a38f-4e12-bc55-98e4f6f1ab2a", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"f2f126f11c65e5260576a652a4372c5d8864b4887e56cc0ab16819b2829556d0", Pod:"coredns-7db6d8ff4d-8vxgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4a717f2c16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.498 [INFO][4427] k8s.go 608: Cleaning up netns ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.498 [INFO][4427] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" iface="eth0" netns="" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.498 [INFO][4427] k8s.go 615: Releasing IP address(es) ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.498 [INFO][4427] utils.go 188: Calico CNI releasing IP address ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.616 [INFO][4433] ipam_plugin.go 411: Releasing address using handleID ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.616 [INFO][4433] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.616 [INFO][4433] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.628 [WARNING][4433] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.628 [INFO][4433] ipam_plugin.go 439: Releasing address using workloadID ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" HandleID="k8s-pod-network.88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-coredns--7db6d8ff4d--8vxgt-eth0" Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.632 [INFO][4433] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:27:59.638126 containerd[1277]: 2024-06-25 16:27:59.635 [INFO][4427] k8s.go 621: Teardown processing complete. ContainerID="88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a" Jun 25 16:27:59.638126 containerd[1277]: time="2024-06-25T16:27:59.637843784Z" level=info msg="TearDown network for sandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" successfully" Jun 25 16:27:59.642631 containerd[1277]: time="2024-06-25T16:27:59.642523133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:27:59.642835 containerd[1277]: time="2024-06-25T16:27:59.642693517Z" level=info msg="RemovePodSandbox \"88161ca4f7dd34a03675d7dc7327fdf65d44cc85b4aa7e41b6759046fe59fc3a\" returns successfully" Jun 25 16:28:04.234248 systemd[1]: run-containerd-runc-k8s.io-10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a-runc.PTvHX6.mount: Deactivated successfully. Jun 25 16:28:04.628468 systemd[1]: Started sshd@13-161.35.235.79:22-139.178.89.65:35694.service - OpenSSH per-connection server daemon (139.178.89.65:35694). Jun 25 16:28:04.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-161.35.235.79:22-139.178.89.65:35694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:04.635858 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:28:04.636070 kernel: audit: type=1130 audit(1719332884.629:648): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-161.35.235.79:22-139.178.89.65:35694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:04.693000 audit[4478]: USER_ACCT pid=4478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.700732 sshd[4478]: Accepted publickey for core from 139.178.89.65 port 35694 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:04.701410 kernel: audit: type=1101 audit(1719332884.693:649): pid=4478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.702000 audit[4478]: CRED_ACQ pid=4478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.704722 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:04.709195 kernel: audit: type=1103 audit(1719332884.702:650): pid=4478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.715078 kernel: audit: type=1006 audit(1719332884.702:651): pid=4478 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:28:04.702000 audit[4478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd862e7ca0 a2=3 a3=7f4f97b2a480 items=0 ppid=1 pid=4478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:04.723010 kernel: audit: type=1300 audit(1719332884.702:651): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd862e7ca0 a2=3 a3=7f4f97b2a480 items=0 ppid=1 pid=4478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:04.723175 kernel: audit: type=1327 audit(1719332884.702:651): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:04.702000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:04.725910 systemd-logind[1266]: New session 14 of user core. Jun 25 16:28:04.730406 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:28:04.740000 audit[4478]: USER_START pid=4478 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.748288 kernel: audit: type=1105 audit(1719332884.740:652): pid=4478 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.742000 audit[4480]: CRED_ACQ pid=4480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.754090 kernel: audit: type=1103 audit(1719332884.742:653): pid=4480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.923129 sshd[4478]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:04.925000 audit[4478]: USER_END pid=4478 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.933071 kernel: audit: type=1106 audit(1719332884.925:654): pid=4478 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.925000 audit[4478]: CRED_DISP pid=4478 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.934827 systemd[1]: sshd@13-161.35.235.79:22-139.178.89.65:35694.service: Deactivated successfully. Jun 25 16:28:04.936507 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:28:04.940150 kernel: audit: type=1104 audit(1719332884.925:655): pid=4478 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:04.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-161.35.235.79:22-139.178.89.65:35694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:04.941746 systemd-logind[1266]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:28:04.943393 systemd-logind[1266]: Removed session 14. Jun 25 16:28:09.943339 systemd[1]: Started sshd@14-161.35.235.79:22-139.178.89.65:47914.service - OpenSSH per-connection server daemon (139.178.89.65:47914). 
Jun 25 16:28:09.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-161.35.235.79:22-139.178.89.65:47914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:09.945674 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:09.945790 kernel: audit: type=1130 audit(1719332889.943:657): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-161.35.235.79:22-139.178.89.65:47914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:09.999000 audit[4492]: USER_ACCT pid=4492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.000930 sshd[4492]: Accepted publickey for core from 139.178.89.65 port 47914 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:10.008191 kernel: audit: type=1101 audit(1719332889.999:658): pid=4492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.008601 kernel: audit: type=1103 audit(1719332890.006:659): pid=4492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.006000 audit[4492]: CRED_ACQ pid=4492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.009860 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:10.021086 kernel: audit: type=1006 audit(1719332890.008:660): pid=4492 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:28:10.008000 audit[4492]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe99dc5460 a2=3 a3=7fe35a882480 items=0 ppid=1 pid=4492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:10.029139 kernel: audit: type=1300 audit(1719332890.008:660): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe99dc5460 a2=3 a3=7fe35a882480 items=0 ppid=1 pid=4492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:10.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:10.032150 kernel: audit: type=1327 audit(1719332890.008:660): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:10.035185 systemd-logind[1266]: New session 15 of user core. Jun 25 16:28:10.040587 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:28:10.051000 audit[4492]: USER_START pid=4492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.059179 kernel: audit: type=1105 audit(1719332890.051:661): pid=4492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.061000 audit[4494]: CRED_ACQ pid=4494 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.070359 kernel: audit: type=1103 audit(1719332890.061:662): pid=4494 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.238587 sshd[4492]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:10.239000 audit[4492]: USER_END pid=4492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.248135 kernel: audit: type=1106 audit(1719332890.239:663): pid=4492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.239000 audit[4492]: CRED_DISP pid=4492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.249473 systemd[1]: sshd@14-161.35.235.79:22-139.178.89.65:47914.service: Deactivated successfully. Jun 25 16:28:10.250808 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:28:10.256085 kernel: audit: type=1104 audit(1719332890.239:664): pid=4492 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:10.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-161.35.235.79:22-139.178.89.65:47914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:10.255285 systemd-logind[1266]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:28:10.257899 systemd-logind[1266]: Removed session 15. 
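Sessions 12 through 15 above all follow the same audit pattern: USER_ACCT and CRED_ACQ when the public key is accepted, USER_START when pam_unix opens the session, USER_END and CRED_DISP when it closes, plus SERVICE_START/SERVICE_STOP for the per-connection sshd unit. A small Python sketch over the same kind of plain-text dump (journal.txt is again only a placeholder) that pairs USER_START and USER_END records by their ses= field and reports how long each session lasted; the timestamp format is taken from the "Jun 25 16:28:04.740000 audit[...]" prefixes above:

import re
from datetime import datetime

TS_RE = re.compile(r"^([A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d+) ")
SES_RE = re.compile(r"\bses=(\d+)\b")

def session_durations(path="journal.txt"):
    """Pair USER_START / USER_END audit records by their ses= id."""
    starts, durations = {}, {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            ses, ts = SES_RE.search(line), TS_RE.match(line)
            if not ses or not ts:
                continue
            when = datetime.strptime(ts.group(1), "%b %d %H:%M:%S.%f")
            if "USER_START" in line:
                starts[ses.group(1)] = when
            elif "USER_END" in line and ses.group(1) in starts:
                durations[ses.group(1)] = when - starts.pop(ses.group(1))
    return durations

if __name__ == "__main__":
    for ses, delta in sorted(session_durations().items(), key=lambda kv: int(kv[0])):
        print(f"session {ses}: {delta.total_seconds():.1f}s")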
Jun 25 16:28:10.267000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:10.267000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00212cc80 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:28:10.267000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:10.271000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:10.271000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00212cca0 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:28:10.271000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:10.278000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:10.278000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:10.278000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00212cea0 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:28:10.278000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:10.278000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001cafb40 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:28:10.278000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:12.544853 systemd[1]: run-containerd-runc-k8s.io-10a75cd7b0a4b7389e3c5267c0473d86c3067ead7d407a0b2443e34c862eba2a-runc.cpjrZr.mount: Deactivated successfully. Jun 25 16:28:15.265853 systemd[1]: Started sshd@15-161.35.235.79:22-139.178.89.65:47926.service - OpenSSH per-connection server daemon (139.178.89.65:47926). Jun 25 16:28:15.275510 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:28:15.275560 kernel: audit: type=1130 audit(1719332895.266:670): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-161.35.235.79:22-139.178.89.65:47926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:15.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-161.35.235.79:22-139.178.89.65:47926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:15.322000 audit[4531]: USER_ACCT pid=4531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.324517 sshd[4531]: Accepted publickey for core from 139.178.89.65 port 47926 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:15.331118 kernel: audit: type=1101 audit(1719332895.322:671): pid=4531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.329000 audit[4531]: CRED_ACQ pid=4531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.332175 sshd[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:15.341019 kernel: audit: type=1103 audit(1719332895.329:672): pid=4531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.341203 kernel: audit: type=1006 audit(1719332895.329:673): pid=4531 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:28:15.344561 kernel: audit: type=1300 audit(1719332895.329:673): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed0757d40 a2=3 a3=7f71edb7e480 items=0 ppid=1 pid=4531 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:15.329000 audit[4531]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed0757d40 a2=3 a3=7f71edb7e480 items=0 ppid=1 pid=4531 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:15.346139 systemd-logind[1266]: New session 16 of user core. Jun 25 16:28:15.353937 kernel: audit: type=1327 audit(1719332895.329:673): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:15.329000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:15.352777 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:28:15.363000 audit[4531]: USER_START pid=4531 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.372168 kernel: audit: type=1105 audit(1719332895.363:674): pid=4531 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.372328 kernel: audit: type=1103 audit(1719332895.370:675): pid=4533 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.370000 audit[4533]: CRED_ACQ pid=4533 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.565461 sshd[4531]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:15.568000 audit[4531]: USER_END pid=4531 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.572313 systemd[1]: sshd@15-161.35.235.79:22-139.178.89.65:47926.service: Deactivated successfully. Jun 25 16:28:15.573636 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:28:15.577293 kernel: audit: type=1106 audit(1719332895.568:676): pid=4531 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.576709 systemd-logind[1266]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:28:15.568000 audit[4531]: CRED_DISP pid=4531 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.578828 systemd-logind[1266]: Removed session 16. 
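The SYSCALL and AVC records in this stretch end with PROCTITLE lines whose proctitle= value is the process command line, hex-encoded with NUL-separated arguments: 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]", and the long 6B7562... string is kube-controller-manager followed by its flags (truncated at the audit record's length limit). A short helper, offered only as a convenience sketch, that turns such a value back into readable text:

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex payload into a readable command line.

    The kernel encodes the process title as hex bytes with NUL-separated
    argv entries; replacing the NULs with spaces recovers the command line.
    """
    raw = bytes.fromhex(hex_value)
    return raw.decode("utf-8", errors="replace").replace("\x00", " ")

# Example taken from the sshd records above:
print(decode_proctitle("737368643A20636F7265205B707269765D"))  # -> sshd: core [priv]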
Jun 25 16:28:15.585136 kernel: audit: type=1104 audit(1719332895.568:677): pid=4531 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:15.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-161.35.235.79:22-139.178.89.65:47926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:16.412877 kubelet[2274]: E0625 16:28:16.412791 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:16.413920 kubelet[2274]: E0625 16:28:16.413742 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:20.595006 systemd[1]: Started sshd@16-161.35.235.79:22-139.178.89.65:46606.service - OpenSSH per-connection server daemon (139.178.89.65:46606). Jun 25 16:28:20.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-161.35.235.79:22-139.178.89.65:46606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:20.597768 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:20.597857 kernel: audit: type=1130 audit(1719332900.595:679): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-161.35.235.79:22-139.178.89.65:46606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:20.689015 sshd[4544]: Accepted publickey for core from 139.178.89.65 port 46606 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:20.686000 audit[4544]: USER_ACCT pid=4544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.690986 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:20.696245 kernel: audit: type=1101 audit(1719332900.686:680): pid=4544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.700098 kernel: audit: type=1103 audit(1719332900.688:681): pid=4544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.688000 audit[4544]: CRED_ACQ pid=4544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.703669 kernel: audit: type=1006 audit(1719332900.688:682): pid=4544 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:28:20.703122 systemd-logind[1266]: New session 17 of user core. Jun 25 16:28:20.709496 kernel: audit: type=1300 audit(1719332900.688:682): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff283381f0 a2=3 a3=7f0c4924c480 items=0 ppid=1 pid=4544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.688000 audit[4544]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff283381f0 a2=3 a3=7f0c4924c480 items=0 ppid=1 pid=4544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:20.708524 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:28:20.718110 kernel: audit: type=1327 audit(1719332900.688:682): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:20.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:20.731158 kernel: audit: type=1105 audit(1719332900.723:683): pid=4544 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.723000 audit[4544]: USER_START pid=4544 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.726000 audit[4546]: CRED_ACQ pid=4546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.737514 kernel: audit: type=1103 audit(1719332900.726:684): pid=4546 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.950511 sshd[4544]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:20.951000 audit[4544]: USER_END pid=4544 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.959072 kernel: audit: type=1106 audit(1719332900.951:685): pid=4544 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.953000 audit[4544]: CRED_DISP pid=4544 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.967078 kernel: audit: type=1104 audit(1719332900.953:686): pid=4544 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:20.969401 systemd[1]: sshd@16-161.35.235.79:22-139.178.89.65:46606.service: Deactivated successfully. Jun 25 16:28:20.971183 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:28:20.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-161.35.235.79:22-139.178.89.65:46606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:20.973332 systemd-logind[1266]: Session 17 logged out. Waiting for processes to exit. 
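Each SSH connection above also gets its own transient systemd unit, named sshd@<counter>-<local address>:22-<peer address>:<peer port>.service, started when the connection arrives and stopped ("Deactivated successfully") once the session ends. A tiny sketch that splits such a unit name into its parts; the layout is inferred from the names in this log rather than from any systemd documentation:

def parse_sshd_unit(unit: str):
    """Split a per-connection unit name such as
    'sshd@17-161.35.235.79:22-139.178.89.65:46620.service'
    into (counter, local endpoint, peer endpoint)."""
    instance = unit.removeprefix("sshd@").removesuffix(".service")
    counter, local, peer = instance.split("-", 2)
    return int(counter), local, peer

print(parse_sshd_unit("sshd@17-161.35.235.79:22-139.178.89.65:46620.service"))
# -> (17, '161.35.235.79:22', '139.178.89.65:46620')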
Jun 25 16:28:20.981108 systemd[1]: Started sshd@17-161.35.235.79:22-139.178.89.65:46620.service - OpenSSH per-connection server daemon (139.178.89.65:46620). Jun 25 16:28:20.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-161.35.235.79:22-139.178.89.65:46620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:20.984652 systemd-logind[1266]: Removed session 17. Jun 25 16:28:21.034000 audit[4556]: USER_ACCT pid=4556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.036382 sshd[4556]: Accepted publickey for core from 139.178.89.65 port 46620 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:21.037000 audit[4556]: CRED_ACQ pid=4556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.037000 audit[4556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdef5eb690 a2=3 a3=7fefa6d1e480 items=0 ppid=1 pid=4556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:21.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:21.038805 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:21.047123 systemd-logind[1266]: New session 18 of user core. Jun 25 16:28:21.052437 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 16:28:21.059000 audit[4556]: USER_START pid=4556 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.065000 audit[4558]: CRED_ACQ pid=4558 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.543251 sshd[4556]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:21.544000 audit[4556]: USER_END pid=4556 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.546000 audit[4556]: CRED_DISP pid=4556 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-161.35.235.79:22-139.178.89.65:46632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:21.558831 systemd[1]: Started sshd@18-161.35.235.79:22-139.178.89.65:46632.service - OpenSSH per-connection server daemon (139.178.89.65:46632). Jun 25 16:28:21.560189 systemd[1]: sshd@17-161.35.235.79:22-139.178.89.65:46620.service: Deactivated successfully. Jun 25 16:28:21.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-161.35.235.79:22-139.178.89.65:46620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:21.562094 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:28:21.565636 systemd-logind[1266]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:28:21.570028 systemd-logind[1266]: Removed session 18. Jun 25 16:28:21.620000 audit[4565]: USER_ACCT pid=4565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.622118 sshd[4565]: Accepted publickey for core from 139.178.89.65 port 46632 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:21.622000 audit[4565]: CRED_ACQ pid=4565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.623000 audit[4565]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc30bc4f20 a2=3 a3=7f9127aab480 items=0 ppid=1 pid=4565 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:21.623000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:21.625495 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:21.634831 systemd-logind[1266]: New session 19 of user core. Jun 25 16:28:21.639516 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 16:28:21.647000 audit[4565]: USER_START pid=4565 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:21.650000 audit[4568]: CRED_ACQ pid=4568 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:22.413179 kubelet[2274]: E0625 16:28:22.412564 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:23.411781 kubelet[2274]: E0625 16:28:23.411734 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:24.361506 sshd[4565]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:24.364000 audit[4565]: USER_END pid=4565 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:24.364000 audit[4565]: CRED_DISP pid=4565 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:24.381754 systemd[1]: Started sshd@19-161.35.235.79:22-139.178.89.65:46644.service - OpenSSH per-connection server daemon (139.178.89.65:46644). Jun 25 16:28:24.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-161.35.235.79:22-139.178.89.65:46644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-161.35.235.79:22-139.178.89.65:46632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.384082 systemd[1]: sshd@18-161.35.235.79:22-139.178.89.65:46632.service: Deactivated successfully. Jun 25 16:28:24.385252 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:28:24.389735 systemd-logind[1266]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:28:24.393603 systemd-logind[1266]: Removed session 19. 
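The recurring kubelet dns.go errors above mean the node hands pods more nameserver entries than kubelet will propagate, so it truncates the list (the cap has historically been three, matching the classic glibc resolver limit); the applied line here even carries 67.207.67.2 twice. A quick sketch that counts the nameserver entries in a resolv.conf-style file and shows what would be dropped; the path and the limit constant are assumptions of the sketch, not values read from this system:

# The limit of 3 mirrors the truncated three-entry list in the log above.
MAX_NAMESERVERS = 3
RESOLV_CONF = "/etc/resolv.conf"  # kubelet may point --resolv-conf elsewhere

def check_nameservers(path=RESOLV_CONF, limit=MAX_NAMESERVERS):
    """Return (applied, omitted) nameserver lists for a resolv.conf file."""
    servers = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers[:limit], servers[limit:]

if __name__ == "__main__":
    applied, omitted = check_nameservers()
    print("applied:", " ".join(applied))
    if omitted:
        print("omitted:", " ".join(omitted))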
Jun 25 16:28:24.406000 audit[4608]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=4608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:24.406000 audit[4608]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffce6567cf0 a2=0 a3=7ffce6567cdc items=0 ppid=2449 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:24.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:24.408000 audit[4608]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:24.408000 audit[4608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce6567cf0 a2=0 a3=0 items=0 ppid=2449 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:24.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:24.453009 sshd[4606]: Accepted publickey for core from 139.178.89.65 port 46644 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:24.451000 audit[4606]: USER_ACCT pid=4606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:24.456683 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:24.454000 audit[4606]: CRED_ACQ pid=4606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:24.454000 audit[4606]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe717f94a0 a2=3 a3=7efe46294480 items=0 ppid=1 pid=4606 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:24.454000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:24.467185 systemd-logind[1266]: New session 20 of user core. Jun 25 16:28:24.472356 systemd[1]: Started session-20.scope - Session 20 of User core. 
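The NETFILTER_CFG records above are iptables-restore rewriting the filter and nat tables; their PROCTITLE decodes to iptables-restore -w 5 -W 100000 --noflush --counters, which on a node like this is most likely kube-proxy's periodic rule sync. Each record names the table, the address family, and how many entries were registered. A sketch that tallies these events per table from the same kind of text dump (again with a placeholder file name), so bursts of rule reloads stand out:

import re
from collections import Counter

# Matches e.g.: NETFILTER_CFG table=filter:113 family=2 entries=32 op=nft_register_rule
NFT_RE = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

def tally_netfilter(path="journal.txt"):
    """Count NETFILTER_CFG audit events and registered entries per table."""
    reloads, entries = Counter(), Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = NFT_RE.search(line)
            if match:
                reloads[match.group(1)] += 1
                entries[match.group(1)] += int(match.group(3))
    return reloads, entries

if __name__ == "__main__":
    reloads, entries = tally_netfilter()
    for table, count in reloads.items():
        print(f"{table}: {count} reloads, {entries[table]} entries registered")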
Jun 25 16:28:24.480000 audit[4606]: USER_START pid=4606 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:24.483000 audit[4612]: CRED_ACQ pid=4612 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:24.487000 audit[4611]: NETFILTER_CFG table=filter:113 family=2 entries=32 op=nft_register_rule pid=4611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:24.487000 audit[4611]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff36389300 a2=0 a3=7fff363892ec items=0 ppid=2449 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:24.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:24.489000 audit[4611]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:24.489000 audit[4611]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff36389300 a2=0 a3=0 items=0 ppid=2449 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:24.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:25.266520 sshd[4606]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:25.269000 audit[4606]: USER_END pid=4606 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.270000 audit[4606]: CRED_DISP pid=4606 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.276964 systemd[1]: sshd@19-161.35.235.79:22-139.178.89.65:46644.service: Deactivated successfully. Jun 25 16:28:25.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-161.35.235.79:22-139.178.89.65:46644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:25.278258 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:28:25.280691 systemd-logind[1266]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:28:25.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-161.35.235.79:22-139.178.89.65:46652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:25.286928 systemd[1]: Started sshd@20-161.35.235.79:22-139.178.89.65:46652.service - OpenSSH per-connection server daemon (139.178.89.65:46652). Jun 25 16:28:25.297193 systemd-logind[1266]: Removed session 20. Jun 25 16:28:25.341000 audit[4625]: USER_ACCT pid=4625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.342513 sshd[4625]: Accepted publickey for core from 139.178.89.65 port 46652 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:25.343000 audit[4625]: CRED_ACQ pid=4625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.343000 audit[4625]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd56414bc0 a2=3 a3=7fc444c88480 items=0 ppid=1 pid=4625 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:25.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:25.344956 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:25.351151 systemd-logind[1266]: New session 21 of user core. Jun 25 16:28:25.356405 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:28:25.361000 audit[4625]: USER_START pid=4625 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.364000 audit[4627]: CRED_ACQ pid=4627 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.558448 sshd[4625]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:25.559000 audit[4625]: USER_END pid=4625 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.560000 audit[4625]: CRED_DISP pid=4625 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:25.565430 systemd-logind[1266]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:28:25.565786 systemd[1]: sshd@20-161.35.235.79:22-139.178.89.65:46652.service: Deactivated successfully. Jun 25 16:28:25.566879 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:28:25.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-161.35.235.79:22-139.178.89.65:46652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:25.569271 systemd-logind[1266]: Removed session 21. Jun 25 16:28:25.966486 kubelet[2274]: I0625 16:28:25.966406 2274 topology_manager.go:215] "Topology Admit Handler" podUID="5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b" podNamespace="calico-apiserver" podName="calico-apiserver-7b4df5d698-269lp" Jun 25 16:28:25.985280 systemd[1]: Created slice kubepods-besteffort-pod5f259d46_3e5c_4eef_9df4_3b8b8a9b2e6b.slice - libcontainer container kubepods-besteffort-pod5f259d46_3e5c_4eef_9df4_3b8b8a9b2e6b.slice. Jun 25 16:28:26.026489 kubelet[2274]: W0625 16:28:26.026400 2274 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.4-0-d0607f9d2c" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3815.2.4-0-d0607f9d2c' and this object Jun 25 16:28:26.026835 kubelet[2274]: E0625 16:28:26.026512 2274 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.4-0-d0607f9d2c" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3815.2.4-0-d0607f9d2c' and this object Jun 25 16:28:26.047000 audit[4638]: NETFILTER_CFG table=filter:115 family=2 entries=33 op=nft_register_rule pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:26.050211 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:28:26.050352 kernel: audit: type=1325 audit(1719332906.047:728): table=filter:115 family=2 entries=33 op=nft_register_rule pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:26.047000 audit[4638]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffe693ab910 a2=0 a3=7ffe693ab8fc items=0 ppid=2449 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:26.068246 kernel: audit: type=1300 audit(1719332906.047:728): arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffe693ab910 a2=0 a3=7ffe693ab8fc items=0 ppid=2449 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:26.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:26.082076 kernel: audit: type=1327 audit(1719332906.047:728): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:26.083000 audit[4638]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:26.094110 kernel: audit: type=1325 audit(1719332906.083:729): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:26.094260 kernel: audit: type=1300 audit(1719332906.083:729): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe693ab910 a2=0 a3=0 items=0 ppid=2449 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:26.083000 audit[4638]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe693ab910 a2=0 a3=0 items=0 ppid=2449 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:26.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:26.107086 kernel: audit: type=1327 audit(1719332906.083:729): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:26.153600 kubelet[2274]: I0625 16:28:26.153546 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqp64\" (UniqueName: \"kubernetes.io/projected/5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b-kube-api-access-gqp64\") pod \"calico-apiserver-7b4df5d698-269lp\" (UID: \"5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b\") " pod="calico-apiserver/calico-apiserver-7b4df5d698-269lp" Jun 25 16:28:26.153910 kubelet[2274]: I0625 16:28:26.153892 2274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b-calico-apiserver-certs\") pod \"calico-apiserver-7b4df5d698-269lp\" (UID: \"5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b\") " pod="calico-apiserver/calico-apiserver-7b4df5d698-269lp" Jun 25 16:28:27.118000 audit[4641]: NETFILTER_CFG table=filter:117 family=2 entries=34 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:27.124137 kernel: audit: type=1325 audit(1719332907.118:730): table=filter:117 family=2 entries=34 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:27.118000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffff12bb700 a2=0 a3=7ffff12bb6ec items=0 ppid=2449 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.132150 kernel: audit: type=1300 audit(1719332907.118:730): arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffff12bb700 a2=0 a3=7ffff12bb6ec items=0 ppid=2449 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:27.137096 kernel: audit: type=1327 audit(1719332907.118:730): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:27.138000 audit[4641]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:27.143124 kernel: audit: type=1325 audit(1719332907.138:731): table=nat:118 family=2 entries=20 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
16:28:27.138000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff12bb700 a2=0 a3=0 items=0 ppid=2449 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.138000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:27.193925 containerd[1277]: time="2024-06-25T16:28:27.193845615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4df5d698-269lp,Uid:5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:28:27.509201 systemd-networkd[1091]: calia80721563bd: Link UP Jun 25 16:28:27.519137 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:28:27.519289 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia80721563bd: link becomes ready Jun 25 16:28:27.518445 systemd-networkd[1091]: calia80721563bd: Gained carrier Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.352 [INFO][4643] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0 calico-apiserver-7b4df5d698- calico-apiserver 5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b 1148 0 2024-06-25 16:28:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4df5d698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-0-d0607f9d2c calico-apiserver-7b4df5d698-269lp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia80721563bd [] []}} ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.353 [INFO][4643] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.405 [INFO][4654] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" HandleID="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.436 [INFO][4654] ipam_plugin.go 264: Auto assigning IP ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" HandleID="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edd00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-0-d0607f9d2c", "pod":"calico-apiserver-7b4df5d698-269lp", "timestamp":"2024-06-25 16:28:27.405250084 +0000 UTC"}, 
Hostname:"ci-3815.2.4-0-d0607f9d2c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.436 [INFO][4654] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.437 [INFO][4654] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.437 [INFO][4654] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-0-d0607f9d2c' Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.440 [INFO][4654] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.450 [INFO][4654] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.458 [INFO][4654] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.463 [INFO][4654] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.474 [INFO][4654] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.474 [INFO][4654] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.478 [INFO][4654] ipam.go 1685: Creating new handle: k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3 Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.486 [INFO][4654] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.499 [INFO][4654] ipam.go 1216: Successfully claimed IPs: [192.168.34.69/26] block=192.168.34.64/26 handle="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.500 [INFO][4654] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.69/26] handle="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" host="ci-3815.2.4-0-d0607f9d2c" Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.500 [INFO][4654] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:28:27.550873 containerd[1277]: 2024-06-25 16:28:27.500 [INFO][4654] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.69/26] IPv6=[] ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" HandleID="k8s-pod-network.15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Workload="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.552120 containerd[1277]: 2024-06-25 16:28:27.504 [INFO][4643] k8s.go 386: Populated endpoint ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0", GenerateName:"calico-apiserver-7b4df5d698-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4df5d698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"", Pod:"calico-apiserver-7b4df5d698-269lp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia80721563bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.552120 containerd[1277]: 2024-06-25 16:28:27.504 [INFO][4643] k8s.go 387: Calico CNI using IPs: [192.168.34.69/32] ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.552120 containerd[1277]: 2024-06-25 16:28:27.504 [INFO][4643] dataplane_linux.go 68: Setting the host side veth name to calia80721563bd ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.552120 containerd[1277]: 2024-06-25 16:28:27.520 [INFO][4643] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.552120 containerd[1277]: 2024-06-25 16:28:27.520 [INFO][4643] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0", GenerateName:"calico-apiserver-7b4df5d698-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4df5d698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-0-d0607f9d2c", ContainerID:"15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3", Pod:"calico-apiserver-7b4df5d698-269lp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia80721563bd", MAC:"ae:71:cf:7f:d8:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:28:27.552120 containerd[1277]: 2024-06-25 16:28:27.544 [INFO][4643] k8s.go 500: Wrote updated endpoint to datastore ContainerID="15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3" Namespace="calico-apiserver" Pod="calico-apiserver-7b4df5d698-269lp" WorkloadEndpoint="ci--3815.2.4--0--d0607f9d2c-k8s-calico--apiserver--7b4df5d698--269lp-eth0" Jun 25 16:28:27.654685 containerd[1277]: time="2024-06-25T16:28:27.654515479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:27.654933 containerd[1277]: time="2024-06-25T16:28:27.654637751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:27.654933 containerd[1277]: time="2024-06-25T16:28:27.654673114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:27.654933 containerd[1277]: time="2024-06-25T16:28:27.654692728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:27.657000 audit[4690]: NETFILTER_CFG table=filter:119 family=2 entries=55 op=nft_register_chain pid=4690 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:28:27.657000 audit[4690]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffe4ec60af0 a2=0 a3=7ffe4ec60adc items=0 ppid=3306 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.657000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:28:27.747613 systemd[1]: Started cri-containerd-15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3.scope - libcontainer container 15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3. Jun 25 16:28:27.812000 audit: BPF prog-id=170 op=LOAD Jun 25 16:28:27.816000 audit: BPF prog-id=171 op=LOAD Jun 25 16:28:27.816000 audit[4696]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4686 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653765633663663765343335323665333061313833343038366435 Jun 25 16:28:27.816000 audit: BPF prog-id=172 op=LOAD Jun 25 16:28:27.816000 audit[4696]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4686 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653765633663663765343335323665333061313833343038366435 Jun 25 16:28:27.816000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:28:27.817000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:28:27.817000 audit: BPF prog-id=173 op=LOAD Jun 25 16:28:27.817000 audit[4696]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4686 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:27.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653765633663663765343335323665333061313833343038366435 Jun 25 16:28:27.882980 containerd[1277]: time="2024-06-25T16:28:27.882914282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4df5d698-269lp,Uid:5f259d46-3e5c-4eef-9df4-3b8b8a9b2e6b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3\"" Jun 25 16:28:27.886739 containerd[1277]: time="2024-06-25T16:28:27.886654961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:28:29.386643 systemd-networkd[1091]: calia80721563bd: Gained IPv6LL Jun 25 16:28:30.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-161.35.235.79:22-139.178.89.65:40908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:30.577887 systemd[1]: Started sshd@21-161.35.235.79:22-139.178.89.65:40908.service - OpenSSH per-connection server daemon (139.178.89.65:40908). Jun 25 16:28:30.687000 audit[4725]: USER_ACCT pid=4725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:30.689000 audit[4725]: CRED_ACQ pid=4725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:30.689000 audit[4725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9d7a10b0 a2=3 a3=7fb18e5ee480 items=0 ppid=1 pid=4725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:30.689000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:30.691167 sshd[4725]: Accepted publickey for core from 139.178.89.65 port 40908 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:30.691713 sshd[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:30.706999 systemd-logind[1266]: New session 22 of user core. Jun 25 16:28:30.710326 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 16:28:30.722000 audit[4725]: USER_START pid=4725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:30.726000 audit[4727]: CRED_ACQ pid=4727 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:31.406000 audit[4735]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:31.409645 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:28:31.409779 kernel: audit: type=1325 audit(1719332911.406:745): table=filter:120 family=2 entries=22 op=nft_register_rule pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:31.413988 kernel: audit: type=1300 audit(1719332911.406:745): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd2c95cca0 a2=0 a3=7ffd2c95cc8c items=0 ppid=2449 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.406000 audit[4735]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd2c95cca0 a2=0 a3=7ffd2c95cc8c items=0 ppid=2449 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.419774 kernel: audit: type=1327 audit(1719332911.406:745): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:31.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:31.425000 audit[4735]: NETFILTER_CFG table=nat:121 family=2 entries=104 op=nft_register_chain pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:31.433476 kernel: audit: type=1325 audit(1719332911.425:746): table=nat:121 family=2 entries=104 op=nft_register_chain pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:31.433588 kernel: audit: type=1300 audit(1719332911.425:746): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd2c95cca0 a2=0 a3=7ffd2c95cc8c items=0 ppid=2449 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.425000 audit[4735]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd2c95cca0 a2=0 a3=7ffd2c95cc8c items=0 ppid=2449 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:31.443064 kernel: audit: type=1327 audit(1719332911.425:746): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:31.477066 containerd[1277]: time="2024-06-25T16:28:31.476990025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:31.483004 containerd[1277]: time="2024-06-25T16:28:31.482890748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:28:31.486571 containerd[1277]: time="2024-06-25T16:28:31.486504348Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:31.496690 containerd[1277]: time="2024-06-25T16:28:31.496635023Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:31.510380 containerd[1277]: time="2024-06-25T16:28:31.510326629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:31.511982 containerd[1277]: time="2024-06-25T16:28:31.511907688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.624917017s" Jun 25 16:28:31.512203 containerd[1277]: time="2024-06-25T16:28:31.511988392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:28:31.566925 containerd[1277]: time="2024-06-25T16:28:31.563853402Z" level=info msg="CreateContainer within sandbox \"15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:28:31.626501 containerd[1277]: time="2024-06-25T16:28:31.626442612Z" level=info msg="CreateContainer within sandbox \"15e7ec6cf7e43526e30a1834086d52726e5eab95b0ab3a5b56e2613442043df3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d0fb77890fdebb31fa56464fa277c936b4826fa61c22e1a8e6920e00db55d73\"" Jun 25 16:28:31.627685 containerd[1277]: time="2024-06-25T16:28:31.627645080Z" level=info msg="StartContainer for \"2d0fb77890fdebb31fa56464fa277c936b4826fa61c22e1a8e6920e00db55d73\"" Jun 25 16:28:31.726395 systemd[1]: Started cri-containerd-2d0fb77890fdebb31fa56464fa277c936b4826fa61c22e1a8e6920e00db55d73.scope - libcontainer container 2d0fb77890fdebb31fa56464fa277c936b4826fa61c22e1a8e6920e00db55d73. 
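Taking the containerd figures above at face value, the pull of ghcr.io/flatcar/calico/apiserver:v3.28.0 moved roughly 40 MB in about 3.6 s. A rough rate calculation, assuming "bytes read=40421260" is the data fetched for this pull and "in 3.624917017s" is its wall-clock duration:

```python
# Rough effective pull rate for the calico/apiserver image, using the
# "bytes read" and pull duration reported by containerd above.
bytes_read = 40_421_260      # bytes read=40421260
duration_s = 3.624917017     # "... in 3.624917017s"

rate = bytes_read / duration_s
print(f"{rate / 1e6:.2f} MB/s")     # ~11.15 MB/s
print(f"{rate / 2**20:.2f} MiB/s")  # ~10.63 MiB/s
```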
Jun 25 16:28:31.745565 sshd[4725]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:31.745000 audit[4725]: USER_END pid=4725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:31.752200 kernel: audit: type=1106 audit(1719332911.745:747): pid=4725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:31.753567 systemd-logind[1266]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:28:31.754586 systemd[1]: sshd@21-161.35.235.79:22-139.178.89.65:40908.service: Deactivated successfully. Jun 25 16:28:31.755655 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:28:31.745000 audit[4725]: CRED_DISP pid=4725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:31.760957 systemd-logind[1266]: Removed session 22. Jun 25 16:28:31.761206 kernel: audit: type=1104 audit(1719332911.745:748): pid=4725 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:31.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-161.35.235.79:22-139.178.89.65:40908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:31.768143 kernel: audit: type=1131 audit(1719332911.753:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-161.35.235.79:22-139.178.89.65:40908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:31.795000 audit: BPF prog-id=174 op=LOAD Jun 25 16:28:31.798066 kernel: audit: type=1334 audit(1719332911.795:750): prog-id=174 op=LOAD Jun 25 16:28:31.797000 audit: BPF prog-id=175 op=LOAD Jun 25 16:28:31.797000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4686 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306662373738393066646562623331666135363436346661323737 Jun 25 16:28:31.798000 audit: BPF prog-id=176 op=LOAD Jun 25 16:28:31.798000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4686 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306662373738393066646562623331666135363436346661323737 Jun 25 16:28:31.798000 audit: BPF prog-id=176 op=UNLOAD Jun 25 16:28:31.798000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:28:31.798000 audit: BPF prog-id=177 op=LOAD Jun 25 16:28:31.798000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4686 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:31.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264306662373738393066646562623331666135363436346661323737 Jun 25 16:28:31.884991 containerd[1277]: time="2024-06-25T16:28:31.884922661Z" level=info msg="StartContainer for \"2d0fb77890fdebb31fa56464fa277c936b4826fa61c22e1a8e6920e00db55d73\" returns successfully" Jun 25 16:28:32.115040 kubelet[2274]: I0625 16:28:32.114860 2274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b4df5d698-269lp" podStartSLOduration=3.476642111 podStartE2EDuration="7.110995545s" podCreationTimestamp="2024-06-25 16:28:25 +0000 UTC" firstStartedPulling="2024-06-25 16:28:27.885457734 +0000 UTC m=+90.762060957" lastFinishedPulling="2024-06-25 16:28:31.519811181 +0000 UTC m=+94.396414391" observedRunningTime="2024-06-25 16:28:32.109467048 +0000 UTC m=+94.986070277" watchObservedRunningTime="2024-06-25 16:28:32.110995545 +0000 UTC m=+94.987598755" Jun 25 16:28:32.140000 audit[4783]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=4783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:32.140000 audit[4783]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffbc3abb90 a2=0 a3=7fffbc3abb7c items=0 ppid=2449 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:32.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:32.145000 audit[4783]: NETFILTER_CFG table=nat:123 family=2 entries=44 op=nft_register_rule pid=4783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:32.145000 audit[4783]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7fffbc3abb90 a2=0 a3=7fffbc3abb7c items=0 ppid=2449 pid=4783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:32.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:33.174000 audit[4787]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:33.174000 audit[4787]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe1a1ca050 a2=0 a3=7ffe1a1ca03c items=0 ppid=2449 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:33.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:33.178000 audit[4787]: NETFILTER_CFG table=nat:125 family=2 entries=51 op=nft_register_chain pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:33.178000 audit[4787]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffe1a1ca050 a2=0 a3=7ffe1a1ca03c items=0 ppid=2449 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:33.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:34.225000 audit[4789]: NETFILTER_CFG table=filter:126 family=2 entries=8 op=nft_register_rule pid=4789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:34.225000 audit[4789]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffedaa84130 a2=0 a3=7ffedaa8411c items=0 ppid=2449 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:34.269000 audit[4789]: NETFILTER_CFG table=nat:127 family=2 entries=54 op=nft_register_rule pid=4789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:34.269000 audit[4789]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffedaa84130 a2=0 a3=7ffedaa8411c items=0 ppid=2449 pid=4789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:34.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:36.770993 systemd[1]: Started sshd@22-161.35.235.79:22-139.178.89.65:58776.service - OpenSSH per-connection server daemon (139.178.89.65:58776). Jun 25 16:28:36.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-161.35.235.79:22-139.178.89.65:58776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:36.774595 kernel: kauditd_printk_skb: 29 callbacks suppressed Jun 25 16:28:36.775101 kernel: audit: type=1130 audit(1719332916.771:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-161.35.235.79:22-139.178.89.65:58776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:36.865000 audit[4814]: USER_ACCT pid=4814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.871353 sshd[4814]: Accepted publickey for core from 139.178.89.65 port 58776 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:36.872294 kernel: audit: type=1101 audit(1719332916.865:763): pid=4814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.872000 audit[4814]: CRED_ACQ pid=4814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.878099 kernel: audit: type=1103 audit(1719332916.872:764): pid=4814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.878260 kernel: audit: type=1006 audit(1719332916.872:765): pid=4814 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:28:36.878799 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:36.872000 audit[4814]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc47d8b480 a2=3 a3=7f2ac85ed480 items=0 ppid=1 pid=4814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.887085 kernel: audit: type=1300 audit(1719332916.872:765): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc47d8b480 a2=3 a3=7f2ac85ed480 items=0 ppid=1 pid=4814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:36.901310 systemd-logind[1266]: New session 23 of user core. 
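The kubelet pod_startup_latency_tracker record a few entries above is internally consistent: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) is 7.110995545s, and podStartSLOduration is that figure minus the image-pull window bounded by the monotonic m=+ offsets of firstStartedPulling and lastFinishedPulling. A quick check of the arithmetic:

```python
# Verify the durations in the pod_startup_latency_tracker record above,
# using the monotonic m=+ offsets kubelet printed there.
first_started_pulling = 90.762060957   # m=+ offset of firstStartedPulling
last_finished_pulling = 94.396414391   # m=+ offset of lastFinishedPulling
pod_start_e2e = 7.110995545            # observedRunningTime - podCreationTimestamp

pull_window = last_finished_pulling - first_started_pulling
pod_start_slo = pod_start_e2e - pull_window

print(round(pull_window, 9))    # 3.634353434 (time spent pulling the image)
print(round(pod_start_slo, 9))  # 3.476642111, matching podStartSLOduration
```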
Jun 25 16:28:36.906645 kernel: audit: type=1327 audit(1719332916.872:765): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:36.872000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:36.906532 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:28:36.919000 audit[4814]: USER_START pid=4814 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.930240 kernel: audit: type=1105 audit(1719332916.919:766): pid=4814 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.928000 audit[4816]: CRED_ACQ pid=4816 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:36.941933 kernel: audit: type=1103 audit(1719332916.928:767): pid=4816 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:37.259418 sshd[4814]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:37.264000 audit[4814]: USER_END pid=4814 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:37.273158 kernel: audit: type=1106 audit(1719332917.264:768): pid=4814 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:37.273811 systemd[1]: sshd@22-161.35.235.79:22-139.178.89.65:58776.service: Deactivated successfully. Jun 25 16:28:37.283192 kernel: audit: type=1104 audit(1719332917.265:769): pid=4814 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:37.265000 audit[4814]: CRED_DISP pid=4814 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:37.279770 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:28:37.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-161.35.235.79:22-139.178.89.65:58776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:37.285869 systemd-logind[1266]: Session 23 logged out. Waiting for processes to exit. 
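Alongside the journal timestamp, each kernel-emitted audit line above carries its own audit(EPOCH.millis:serial) stamp; decoding the epoch confirms the two clocks agree. A one-off sketch using the type=1105 stamp from the records above:

```python
# Convert an audit(EPOCH.millis:serial) stamp from the kernel audit lines
# above into UTC; 1719332916.919:766 is taken from the type=1105 record.
from datetime import datetime, timezone

stamp = "1719332916.919:766"
epoch, serial = stamp.split(":")
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.strftime("%Y-%m-%d %H:%M:%S UTC"), "serial", serial)
# prints: 2024-06-25 16:28:36 UTC serial 766 (matches the Jun 25 16:28:36.919 journal time)
```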
Jun 25 16:28:37.293553 systemd-logind[1266]: Removed session 23. Jun 25 16:28:38.412844 kubelet[2274]: E0625 16:28:38.412794 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:42.289666 systemd[1]: Started sshd@23-161.35.235.79:22-139.178.89.65:58788.service - OpenSSH per-connection server daemon (139.178.89.65:58788). Jun 25 16:28:42.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-161.35.235.79:22-139.178.89.65:58788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.295188 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:42.295411 kernel: audit: type=1130 audit(1719332922.290:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-161.35.235.79:22-139.178.89.65:58788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.353000 audit[4829]: USER_ACCT pid=4829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.355359 sshd[4829]: Accepted publickey for core from 139.178.89.65 port 58788 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:42.364351 kernel: audit: type=1101 audit(1719332922.353:772): pid=4829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.365000 audit[4829]: CRED_ACQ pid=4829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.373108 sshd[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:42.374089 kernel: audit: type=1103 audit(1719332922.365:773): pid=4829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.379122 kernel: audit: type=1006 audit(1719332922.371:774): pid=4829 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:28:42.371000 audit[4829]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8eeb2190 a2=3 a3=7f038e603480 items=0 ppid=1 pid=4829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.387402 kernel: audit: type=1300 audit(1719332922.371:774): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8eeb2190 a2=3 a3=7f038e603480 items=0 ppid=1 pid=4829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.386671 systemd-logind[1266]: New session 24 of user core. 
Jun 25 16:28:42.391517 kernel: audit: type=1327 audit(1719332922.371:774): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:42.371000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:42.390398 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 16:28:42.406000 audit[4829]: USER_START pid=4829 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.413073 kernel: audit: type=1105 audit(1719332922.406:775): pid=4829 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.408000 audit[4831]: CRED_ACQ pid=4831 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.420119 kernel: audit: type=1103 audit(1719332922.408:776): pid=4831 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.597611 sshd[4829]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:42.599000 audit[4829]: USER_END pid=4829 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.603537 systemd[1]: sshd@23-161.35.235.79:22-139.178.89.65:58788.service: Deactivated successfully. Jun 25 16:28:42.604888 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:28:42.608218 kernel: audit: type=1106 audit(1719332922.599:777): pid=4829 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.599000 audit[4829]: CRED_DISP pid=4829 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.616246 kernel: audit: type=1104 audit(1719332922.599:778): pid=4829 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:42.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-161.35.235.79:22-139.178.89.65:58788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:42.616921 systemd-logind[1266]: Session 24 logged out. Waiting for processes to exit. 
Jun 25 16:28:42.619298 systemd-logind[1266]: Removed session 24. Jun 25 16:28:47.626087 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:28:47.626321 kernel: audit: type=1130 audit(1719332927.622:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-161.35.235.79:22-139.178.89.65:47492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:47.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-161.35.235.79:22-139.178.89.65:47492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:47.623546 systemd[1]: Started sshd@24-161.35.235.79:22-139.178.89.65:47492.service - OpenSSH per-connection server daemon (139.178.89.65:47492). Jun 25 16:28:47.677000 audit[4847]: USER_ACCT pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.679610 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 47492 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:47.683849 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:47.682000 audit[4847]: CRED_ACQ pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.686369 kernel: audit: type=1101 audit(1719332927.677:781): pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.686505 kernel: audit: type=1103 audit(1719332927.682:782): pid=4847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.700001 kernel: audit: type=1006 audit(1719332927.682:783): pid=4847 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:28:47.682000 audit[4847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2b726c30 a2=3 a3=7ff010c22480 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:47.707118 kernel: audit: type=1300 audit(1719332927.682:783): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2b726c30 a2=3 a3=7ff010c22480 items=0 ppid=1 pid=4847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:47.682000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:47.711110 kernel: audit: type=1327 audit(1719332927.682:783): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:47.714334 systemd-logind[1266]: New session 25 of user core. 
Jun 25 16:28:47.717799 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 16:28:47.736508 kernel: audit: type=1105 audit(1719332927.727:784): pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.727000 audit[4847]: USER_START pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.730000 audit[4851]: CRED_ACQ pid=4851 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.742092 kernel: audit: type=1103 audit(1719332927.730:785): pid=4851 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.916365 sshd[4847]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:47.917000 audit[4847]: USER_END pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.926170 kernel: audit: type=1106 audit(1719332927.917:786): pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.919000 audit[4847]: CRED_DISP pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.927659 systemd-logind[1266]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:28:47.930687 systemd[1]: sshd@24-161.35.235.79:22-139.178.89.65:47492.service: Deactivated successfully. Jun 25 16:28:47.933276 kernel: audit: type=1104 audit(1719332927.919:787): pid=4847 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:47.932050 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:28:47.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-161.35.235.79:22-139.178.89.65:47492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:47.937001 systemd-logind[1266]: Removed session 25. 
Jun 25 16:28:52.484000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:52.485000 audit[2162]: AVC avc: denied { watch } for pid=2162 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c497,c580 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:52.484000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00156f800 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:28:52.484000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:52.485000 audit[2162]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0015ccf40 a2=fc6 a3=0 items=0 ppid=1996 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c497,c580 key=(null) Jun 25 16:28:52.485000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:52.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-161.35.235.79:22-139.178.89.65:47506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:52.941945 systemd[1]: Started sshd@25-161.35.235.79:22-139.178.89.65:47506.service - OpenSSH per-connection server daemon (139.178.89.65:47506). Jun 25 16:28:52.943472 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:28:52.943513 kernel: audit: type=1130 audit(1719332932.941:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-161.35.235.79:22-139.178.89.65:47506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:52.996000 audit[4881]: USER_ACCT pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:52.998162 sshd[4881]: Accepted publickey for core from 139.178.89.65 port 47506 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:53.005157 kernel: audit: type=1101 audit(1719332932.996:792): pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.005000 audit[4881]: CRED_ACQ pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.007642 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:53.014367 kernel: audit: type=1103 audit(1719332933.005:793): pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.014563 kernel: audit: type=1006 audit(1719332933.005:794): pid=4881 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:28:53.019197 kernel: audit: type=1300 audit(1719332933.005:794): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb4941e00 a2=3 a3=7fdfa0e3b480 items=0 ppid=1 pid=4881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.005000 audit[4881]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb4941e00 a2=3 a3=7fdfa0e3b480 items=0 ppid=1 pid=4881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.005000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:53.026481 systemd-logind[1266]: New session 26 of user core. Jun 25 16:28:53.030999 kernel: audit: type=1327 audit(1719332933.005:794): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:53.029353 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 16:28:53.042000 audit[4881]: USER_START pid=4881 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.051600 kernel: audit: type=1105 audit(1719332933.042:795): pid=4881 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.045000 audit[4883]: CRED_ACQ pid=4883 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.059216 kernel: audit: type=1103 audit(1719332933.045:796): pid=4883 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.280872 sshd[4881]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:53.283000 audit[4881]: USER_END pid=4881 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.287527 systemd[1]: sshd@25-161.35.235.79:22-139.178.89.65:47506.service: Deactivated successfully. Jun 25 16:28:53.288799 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:28:53.291066 systemd-logind[1266]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:28:53.292220 kernel: audit: type=1106 audit(1719332933.283:797): pid=4881 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.292389 kernel: audit: type=1104 audit(1719332933.283:798): pid=4881 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.283000 audit[4881]: CRED_DISP pid=4881 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:53.293041 systemd-logind[1266]: Removed session 26. Jun 25 16:28:53.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-161.35.235.79:22-139.178.89.65:47506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:53.723000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:53.723000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c00d2e9b30 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:28:53.723000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:53.723000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:53.723000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c009b927c0 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:28:53.723000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:53.723000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=526912 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:53.723000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7a a1=c016c7a800 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:28:53.723000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:53.724000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=526920 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:53.724000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c00d2e9b90 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:28:53.724000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:53.726000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=526918 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:53.726000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c00e11c330 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:28:53.726000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:53.734000 audit[2155]: AVC avc: denied { watch } for pid=2155 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=526914 scontext=system_u:system_r:container_t:s0:c243,c408 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:53.734000 audit[2155]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c00d2e9d70 a2=fc6 a3=0 items=0 ppid=1991 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c243,c408 key=(null) Jun 25 16:28:53.734000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136312E33352E3233352E3739002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:55.412582 kubelet[2274]: E0625 16:28:55.412528 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:58.319837 kernel: kauditd_printk_skb: 19 callbacks suppressed Jun 25 16:28:58.320005 kernel: audit: type=1130 audit(1719332938.310:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-161.35.235.79:22-139.178.89.65:42376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:58.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-161.35.235.79:22-139.178.89.65:42376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:58.310359 systemd[1]: Started sshd@26-161.35.235.79:22-139.178.89.65:42376.service - OpenSSH per-connection server daemon (139.178.89.65:42376). 
Jun 25 16:28:58.362000 audit[4900]: USER_ACCT pid=4900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.363566 sshd[4900]: Accepted publickey for core from 139.178.89.65 port 42376 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:28:58.370125 kernel: audit: type=1101 audit(1719332938.362:807): pid=4900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.370000 audit[4900]: CRED_ACQ pid=4900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.372198 sshd[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:28:58.378424 kernel: audit: type=1103 audit(1719332938.370:808): pid=4900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.381905 systemd-logind[1266]: New session 27 of user core. Jun 25 16:28:58.396000 kernel: audit: type=1006 audit(1719332938.370:809): pid=4900 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:28:58.396086 kernel: audit: type=1300 audit(1719332938.370:809): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6e3e9d20 a2=3 a3=7ff5851d4480 items=0 ppid=1 pid=4900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.396123 kernel: audit: type=1327 audit(1719332938.370:809): proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:58.370000 audit[4900]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6e3e9d20 a2=3 a3=7ff5851d4480 items=0 ppid=1 pid=4900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:58.370000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:28:58.395418 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 16:28:58.405000 audit[4900]: USER_START pid=4900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.414077 kernel: audit: type=1105 audit(1719332938.405:810): pid=4900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.409000 audit[4902]: CRED_ACQ pid=4902 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.421264 kernel: audit: type=1103 audit(1719332938.409:811): pid=4902 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.644152 sshd[4900]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:58.646000 audit[4900]: USER_END pid=4900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.658706 kernel: audit: type=1106 audit(1719332938.646:812): pid=4900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.658958 kernel: audit: type=1104 audit(1719332938.650:813): pid=4900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.650000 audit[4900]: CRED_DISP pid=4900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:58.668905 systemd-logind[1266]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:28:58.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-161.35.235.79:22-139.178.89.65:42376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:58.669812 systemd[1]: sshd@26-161.35.235.79:22-139.178.89.65:42376.service: Deactivated successfully. Jun 25 16:28:58.672440 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:28:58.674098 systemd-logind[1266]: Removed session 27.