Dec 13 02:08:41.120970 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:08:41.121014 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:08:41.121031 kernel: BIOS-provided physical RAM map: Dec 13 02:08:41.121044 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 02:08:41.121056 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 02:08:41.121068 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 02:08:41.121086 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 02:08:41.121100 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 02:08:41.121112 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 02:08:41.121125 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 02:08:41.121139 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 02:08:41.121152 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 02:08:41.121165 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 02:08:41.121178 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 02:08:41.121199 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 02:08:41.121214 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 02:08:41.121228 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 02:08:41.121241 kernel: NX (Execute Disable) protection: active Dec 13 02:08:41.121256 kernel: efi: EFI v2.70 by EDK II Dec 13 02:08:41.121271 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 02:08:41.121285 kernel: random: crng init done Dec 13 02:08:41.121300 kernel: SMBIOS 2.4 present. 
Dec 13 02:08:41.121321 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 02:08:41.121337 kernel: Hypervisor detected: KVM Dec 13 02:08:41.121351 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:08:41.121364 kernel: kvm-clock: cpu 0, msr 5d19b001, primary cpu clock Dec 13 02:08:41.121379 kernel: kvm-clock: using sched offset of 13019243827 cycles Dec 13 02:08:41.121417 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:08:41.121439 kernel: tsc: Detected 2299.998 MHz processor Dec 13 02:08:41.121454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:08:41.121469 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:08:41.121485 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 02:08:41.121504 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:08:41.121520 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 02:08:41.121535 kernel: Using GB pages for direct mapping Dec 13 02:08:41.121551 kernel: Secure boot disabled Dec 13 02:08:41.121566 kernel: ACPI: Early table checksum verification disabled Dec 13 02:08:41.121582 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 02:08:41.121597 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 02:08:41.121613 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 02:08:41.121638 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 02:08:41.121654 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 02:08:41.121671 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 02:08:41.121687 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 02:08:41.121704 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 02:08:41.121719 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 02:08:41.121737 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 02:08:41.121753 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 02:08:41.121769 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 02:08:41.121785 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 02:08:41.121800 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 02:08:41.121816 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 02:08:41.121831 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 02:08:41.121847 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 02:08:41.121863 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 02:08:41.121882 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 02:08:41.121898 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 02:08:41.121913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 02:08:41.121928 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 02:08:41.121945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 02:08:41.121960 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 
02:08:41.121977 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 02:08:41.121994 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 02:08:41.122011 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 02:08:41.122032 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Dec 13 02:08:41.122049 kernel: Zone ranges: Dec 13 02:08:41.122065 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:08:41.122082 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 02:08:41.122098 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:08:41.122114 kernel: Movable zone start for each node Dec 13 02:08:41.122131 kernel: Early memory node ranges Dec 13 02:08:41.122148 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 02:08:41.122164 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 02:08:41.122183 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 02:08:41.122200 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 02:08:41.122214 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 02:08:41.122230 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:08:41.122245 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 02:08:41.122260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:08:41.122275 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 02:08:41.122289 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 02:08:41.122306 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 02:08:41.122326 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 02:08:41.122341 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 02:08:41.122357 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 02:08:41.122374 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:08:41.122405 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:08:41.122422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:08:41.122445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:08:41.122461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:08:41.122477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:08:41.122497 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:08:41.122530 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 02:08:41.122547 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 02:08:41.122563 kernel: Booting paravirtualized kernel on KVM Dec 13 02:08:41.122580 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:08:41.122597 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 02:08:41.122613 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 02:08:41.122630 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 02:08:41.122646 kernel: pcpu-alloc: [0] 0 1 Dec 13 02:08:41.122666 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:08:41.122683 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:08:41.122702 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Dec 13 02:08:41.122719 kernel: Policy zone: Normal Dec 13 02:08:41.122737 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:08:41.122754 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:08:41.122771 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 02:08:41.122788 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:08:41.122805 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:08:41.122826 kernel: Memory: 7515400K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 344884K reserved, 0K cma-reserved) Dec 13 02:08:41.122844 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:08:41.122860 kernel: Kernel/User page tables isolation: enabled Dec 13 02:08:41.122880 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:08:41.122897 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:08:41.122913 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:08:41.122931 kernel: rcu: RCU event tracing is enabled. Dec 13 02:08:41.122948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:08:41.122970 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:08:41.122999 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:08:41.123017 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:08:41.123037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:08:41.123053 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 02:08:41.123070 kernel: Console: colour dummy device 80x25 Dec 13 02:08:41.123087 kernel: printk: console [ttyS0] enabled Dec 13 02:08:41.123104 kernel: ACPI: Core revision 20210730 Dec 13 02:08:41.123122 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:08:41.123140 kernel: x2apic enabled Dec 13 02:08:41.123162 kernel: Switched APIC routing to physical x2apic. Dec 13 02:08:41.123179 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 02:08:41.123197 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:08:41.123215 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 02:08:41.123232 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 02:08:41.123250 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 02:08:41.123268 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:08:41.123290 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 02:08:41.123308 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 02:08:41.123326 kernel: Spectre V2 : Mitigation: IBRS Dec 13 02:08:41.123343 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:08:41.123361 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:08:41.123379 kernel: RETBleed: Mitigation: IBRS Dec 13 02:08:41.123419 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:08:41.123523 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 02:08:41.123552 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 02:08:41.123578 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 02:08:41.123595 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:08:41.123611 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:08:41.123627 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:08:41.123644 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:08:41.123661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:08:41.123685 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:08:41.123702 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:08:41.123720 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:08:41.123741 kernel: LSM: Security Framework initializing Dec 13 02:08:41.123763 kernel: SELinux: Initializing. Dec 13 02:08:41.123781 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:08:41.123799 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:08:41.123817 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 02:08:41.123835 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 02:08:41.123853 kernel: signal: max sigframe size: 1776 Dec 13 02:08:41.123871 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:08:41.123889 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 02:08:41.123910 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:08:41.123927 kernel: x86: Booting SMP configuration: Dec 13 02:08:41.123942 kernel: .... node #0, CPUs: #1 Dec 13 02:08:41.123959 kernel: kvm-clock: cpu 1, msr 5d19b041, secondary cpu clock Dec 13 02:08:41.123976 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 02:08:41.123995 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 02:08:41.124013 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:08:41.124031 kernel: smpboot: Max logical packages: 1 Dec 13 02:08:41.124053 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 02:08:41.124072 kernel: devtmpfs: initialized Dec 13 02:08:41.124091 kernel: x86/mm: Memory block size: 128MB Dec 13 02:08:41.124109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 02:08:41.124127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:08:41.124145 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:08:41.124163 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:08:41.124182 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:08:41.124200 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:08:41.124221 kernel: audit: type=2000 audit(1734055719.785:1): state=initialized audit_enabled=0 res=1 Dec 13 02:08:41.124247 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:08:41.124266 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:08:41.124284 kernel: cpuidle: using governor menu Dec 13 02:08:41.124300 kernel: ACPI: bus type PCI registered Dec 13 02:08:41.124317 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:08:41.124335 kernel: dca service started, version 1.12.1 Dec 13 02:08:41.124353 kernel: PCI: Using configuration type 1 for base access Dec 13 02:08:41.124371 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 02:08:41.124409 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:08:41.124427 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:08:41.124444 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:08:41.124462 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:08:41.124480 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:08:41.124497 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:08:41.124515 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:08:41.124533 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:08:41.124551 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:08:41.124573 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 02:08:41.124591 kernel: ACPI: Interpreter enabled Dec 13 02:08:41.124609 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:08:41.124632 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:08:41.124650 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:08:41.124668 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 02:08:41.124685 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:08:41.124921 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:08:41.125112 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 02:08:41.125137 kernel: PCI host bridge to bus 0000:00 Dec 13 02:08:41.125307 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:08:41.125475 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:08:41.125625 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:08:41.125779 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 02:08:41.125933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:08:41.126120 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 02:08:41.126335 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 02:08:41.126543 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 02:08:41.126709 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 02:08:41.126891 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 02:08:41.127067 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 02:08:41.127234 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 02:08:41.127444 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:08:41.127611 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 02:08:41.127773 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 02:08:41.127944 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:08:41.128114 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 02:08:41.128284 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 02:08:41.128313 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:08:41.128332 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:08:41.128351 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:08:41.128369 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:08:41.128403 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 02:08:41.128429 kernel: iommu: Default domain type: Translated Dec 13 02:08:41.128445 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:08:41.128461 kernel: vgaarb: loaded Dec 13 02:08:41.128476 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:08:41.128496 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:08:41.128511 kernel: PTP clock support registered Dec 13 02:08:41.128526 kernel: Registered efivars operations Dec 13 02:08:41.128542 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:08:41.128559 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:08:41.128574 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 02:08:41.128589 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 02:08:41.128604 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 02:08:41.128619 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 02:08:41.128639 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 02:08:41.128655 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:08:41.128672 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:08:41.128690 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:08:41.128716 kernel: pnp: PnP ACPI init Dec 13 02:08:41.128733 kernel: pnp: PnP ACPI: found 7 devices Dec 13 02:08:41.128751 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:08:41.128768 kernel: NET: Registered PF_INET protocol family Dec 13 02:08:41.128785 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 02:08:41.128806 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 02:08:41.128824 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:08:41.128840 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:08:41.128857 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 02:08:41.128873 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 02:08:41.128890 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:08:41.128906 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:08:41.128922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:08:41.128938 kernel: NET: Registered PF_XDP protocol family Dec 13 02:08:41.129121 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:08:41.129285 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:08:41.129997 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:08:41.130188 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 02:08:41.130403 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 02:08:41.130430 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:08:41.130449 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 02:08:41.130473 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 02:08:41.130491 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 02:08:41.130509 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:08:41.130525 kernel: clocksource: Switched to clocksource tsc Dec 13 02:08:41.130541 kernel: Initialise system trusted keyrings Dec 13 02:08:41.130559 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 02:08:41.130576 kernel: Key type asymmetric registered Dec 13 02:08:41.130593 kernel: Asymmetric key parser 'x509' registered Dec 13 02:08:41.130611 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:08:41.130632 kernel: io scheduler mq-deadline registered Dec 13 02:08:41.130650 kernel: io scheduler kyber registered Dec 13 02:08:41.130666 kernel: io scheduler bfq registered Dec 13 02:08:41.130693 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:08:41.130716 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 02:08:41.130895 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 02:08:41.130918 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 02:08:41.131085 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 02:08:41.131108 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 02:08:41.131285 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 02:08:41.131307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:08:41.131325 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:08:41.131343 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 02:08:41.131361 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 02:08:41.131378 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 02:08:41.131576 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 02:08:41.131601 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:08:41.131624 kernel: i8042: Warning: Keylock active Dec 13 02:08:41.131641 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:08:41.131659 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:08:41.131828 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 02:08:41.131982 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 02:08:41.132132 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:08:40 UTC (1734055720) Dec 13 02:08:41.132288 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 02:08:41.132310 kernel: intel_pstate: CPU model not supported Dec 13 02:08:41.132333 kernel: pstore: Registered efi as persistent store backend Dec 13 02:08:41.132351 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:08:41.132368 kernel: Segment Routing with IPv6 Dec 13 02:08:41.132407 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:08:41.132425 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:08:41.132443 kernel: Key type dns_resolver registered Dec 13 02:08:41.132460 kernel: IPI shorthand broadcast: enabled Dec 13 02:08:41.132478 kernel: sched_clock: Marking stable (765775970, 175506336)->(1002466518, -61184212) Dec 13 02:08:41.132496 kernel: registered taskstats version 1 Dec 13 02:08:41.132517 kernel: Loading compiled-in X.509 certificates Dec 13 02:08:41.132535 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:08:41.132554 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:08:41.132571 kernel: Key type .fscrypt registered Dec 13 02:08:41.132588 kernel: Key type fscrypt-provisioning registered Dec 13 02:08:41.132606 kernel: pstore: Using crash dump compression: deflate Dec 13 02:08:41.132624 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:08:41.132641 kernel: ima: No architecture policies found Dec 13 02:08:41.132663 kernel: clk: Disabling unused clocks Dec 13 02:08:41.132680 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 
02:08:41.132699 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:08:41.132717 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:08:41.132736 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:08:41.132754 kernel: Run /init as init process Dec 13 02:08:41.132771 kernel: with arguments: Dec 13 02:08:41.132788 kernel: /init Dec 13 02:08:41.132805 kernel: with environment: Dec 13 02:08:41.132823 kernel: HOME=/ Dec 13 02:08:41.132844 kernel: TERM=linux Dec 13 02:08:41.132861 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:08:41.132882 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:08:41.132904 systemd[1]: Detected virtualization kvm. Dec 13 02:08:41.132923 systemd[1]: Detected architecture x86-64. Dec 13 02:08:41.132941 systemd[1]: Running in initrd. Dec 13 02:08:41.132959 systemd[1]: No hostname configured, using default hostname. Dec 13 02:08:41.132980 systemd[1]: Hostname set to . Dec 13 02:08:41.132999 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:08:41.133018 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:08:41.133036 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:08:41.133054 systemd[1]: Reached target cryptsetup.target. Dec 13 02:08:41.133072 systemd[1]: Reached target paths.target. Dec 13 02:08:41.133090 systemd[1]: Reached target slices.target. Dec 13 02:08:41.133109 systemd[1]: Reached target swap.target. Dec 13 02:08:41.133131 systemd[1]: Reached target timers.target. Dec 13 02:08:41.133150 systemd[1]: Listening on iscsid.socket. Dec 13 02:08:41.133169 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:08:41.133187 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:08:41.133205 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:08:41.133224 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:08:41.133242 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:08:41.133260 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:08:41.133291 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:08:41.133328 systemd[1]: Reached target sockets.target. Dec 13 02:08:41.133350 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:08:41.133370 systemd[1]: Finished network-cleanup.service. Dec 13 02:08:41.133412 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:08:41.133432 systemd[1]: Starting systemd-journald.service... Dec 13 02:08:41.133454 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:08:41.133472 kernel: audit: type=1334 audit(1734055721.116:2): prog-id=6 op=LOAD Dec 13 02:08:41.133491 systemd[1]: Starting systemd-resolved.service... Dec 13 02:08:41.133517 systemd-journald[190]: Journal started Dec 13 02:08:41.133610 systemd-journald[190]: Runtime Journal (/run/log/journal/6eaa6ad812f755edf60f82b4f2a89e0f) is 8.0M, max 148.8M, 140.8M free. Dec 13 02:08:41.116000 audit: BPF prog-id=6 op=LOAD Dec 13 02:08:41.145411 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:08:41.147405 systemd[1]: Started systemd-journald.service. 
Dec 13 02:08:41.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.150187 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:08:41.164536 kernel: audit: type=1130 audit(1734055721.148:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.164586 kernel: audit: type=1130 audit(1734055721.155:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.157235 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:08:41.173532 kernel: audit: type=1130 audit(1734055721.164:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.161664 systemd-modules-load[191]: Inserted module 'overlay' Dec 13 02:08:41.184518 kernel: audit: type=1130 audit(1734055721.176:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.166168 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:08:41.181535 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:08:41.194405 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:08:41.213402 kernel: audit: type=1130 audit(1734055721.207:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.208962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:08:41.221699 systemd-resolved[192]: Positive Trust Anchors: Dec 13 02:08:41.221720 systemd-resolved[192]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:08:41.221773 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:08:41.228784 systemd-resolved[192]: Defaulting to hostname 'linux'. Dec 13 02:08:41.230456 systemd[1]: Started systemd-resolved.service. Dec 13 02:08:41.230624 systemd[1]: Reached target nss-lookup.target. Dec 13 02:08:41.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.236405 kernel: audit: type=1130 audit(1734055721.229:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.242786 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:08:41.259788 kernel: audit: type=1130 audit(1734055721.245:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.259825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:08:41.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.254723 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:08:41.263505 kernel: Bridge firewalling registered Dec 13 02:08:41.263985 systemd-modules-load[191]: Inserted module 'br_netfilter' Dec 13 02:08:41.272313 dracut-cmdline[206]: dracut-dracut-053 Dec 13 02:08:41.276598 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:08:41.301410 kernel: SCSI subsystem initialized Dec 13 02:08:41.321937 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:08:41.322018 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:08:41.324402 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:08:41.329890 systemd-modules-load[191]: Inserted module 'dm_multipath' Dec 13 02:08:41.330998 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:08:41.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:41.344515 kernel: audit: type=1130 audit(1734055721.339:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.344578 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:08:41.357843 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:08:41.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.383425 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:08:41.405411 kernel: iscsi: registered transport (tcp) Dec 13 02:08:41.432423 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:08:41.432501 kernel: QLogic iSCSI HBA Driver Dec 13 02:08:41.479594 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:08:41.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.481927 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:08:41.540443 kernel: raid6: avx2x4 gen() 18130 MB/s Dec 13 02:08:41.557435 kernel: raid6: avx2x4 xor() 7961 MB/s Dec 13 02:08:41.575428 kernel: raid6: avx2x2 gen() 18380 MB/s Dec 13 02:08:41.592437 kernel: raid6: avx2x2 xor() 18317 MB/s Dec 13 02:08:41.610443 kernel: raid6: avx2x1 gen() 14179 MB/s Dec 13 02:08:41.627445 kernel: raid6: avx2x1 xor() 15985 MB/s Dec 13 02:08:41.645440 kernel: raid6: sse2x4 gen() 10746 MB/s Dec 13 02:08:41.663435 kernel: raid6: sse2x4 xor() 6504 MB/s Dec 13 02:08:41.681435 kernel: raid6: sse2x2 gen() 11762 MB/s Dec 13 02:08:41.698429 kernel: raid6: sse2x2 xor() 7357 MB/s Dec 13 02:08:41.716425 kernel: raid6: sse2x1 gen() 10456 MB/s Dec 13 02:08:41.734718 kernel: raid6: sse2x1 xor() 5101 MB/s Dec 13 02:08:41.734806 kernel: raid6: using algorithm avx2x2 gen() 18380 MB/s Dec 13 02:08:41.734831 kernel: raid6: .... xor() 18317 MB/s, rmw enabled Dec 13 02:08:41.736077 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:08:41.751429 kernel: xor: automatically using best checksumming function avx Dec 13 02:08:41.861432 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:08:41.874213 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:08:41.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.873000 audit: BPF prog-id=7 op=LOAD Dec 13 02:08:41.874000 audit: BPF prog-id=8 op=LOAD Dec 13 02:08:41.876549 systemd[1]: Starting systemd-udevd.service... Dec 13 02:08:41.893563 systemd-udevd[389]: Using default interface naming scheme 'v252'. Dec 13 02:08:41.901531 systemd[1]: Started systemd-udevd.service. Dec 13 02:08:41.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.906521 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:08:41.929087 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation Dec 13 02:08:41.969210 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 02:08:41.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:41.978686 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:08:42.044912 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:08:42.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:42.123422 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:08:42.168425 kernel: scsi host0: Virtio SCSI HBA Dec 13 02:08:42.178446 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:08:42.188460 kernel: AES CTR mode by8 optimization enabled Dec 13 02:08:42.220418 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 02:08:42.306720 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 02:08:42.365980 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 02:08:42.366231 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 02:08:42.366488 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 02:08:42.366693 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:08:42.366895 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:08:42.366927 kernel: GPT:17805311 != 25165823 Dec 13 02:08:42.366949 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:08:42.366971 kernel: GPT:17805311 != 25165823 Dec 13 02:08:42.366993 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:08:42.367015 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:08:42.367038 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 02:08:42.425935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:08:42.434696 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437) Dec 13 02:08:42.449450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:08:42.453878 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:08:42.466735 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:08:42.493150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:08:42.520620 systemd[1]: Starting disk-uuid.service... Dec 13 02:08:42.541753 disk-uuid[518]: Primary Header is updated. Dec 13 02:08:42.541753 disk-uuid[518]: Secondary Entries is updated. Dec 13 02:08:42.541753 disk-uuid[518]: Secondary Header is updated. Dec 13 02:08:42.567510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:08:42.584413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:08:42.607419 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:08:43.603877 disk-uuid[519]: The operation has completed successfully. Dec 13 02:08:43.612536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:08:43.668256 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:08:43.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:43.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:43.668438 systemd[1]: Finished disk-uuid.service. Dec 13 02:08:43.684013 systemd[1]: Starting verity-setup.service... Dec 13 02:08:43.713240 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:08:43.797060 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:08:43.798824 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:08:43.816753 systemd[1]: Finished verity-setup.service. Dec 13 02:08:43.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:43.902110 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:08:43.902012 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:08:43.909845 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:08:43.955286 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:08:43.955336 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:08:43.955360 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:08:43.910889 systemd[1]: Starting ignition-setup.service... Dec 13 02:08:43.973615 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:08:43.925836 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:08:43.996339 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:08:44.005495 systemd[1]: Finished ignition-setup.service. Dec 13 02:08:44.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.007173 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:08:44.055648 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:08:44.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.055000 audit: BPF prog-id=9 op=LOAD Dec 13 02:08:44.057681 systemd[1]: Starting systemd-networkd.service... Dec 13 02:08:44.091266 systemd-networkd[693]: lo: Link UP Dec 13 02:08:44.091282 systemd-networkd[693]: lo: Gained carrier Dec 13 02:08:44.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.092065 systemd-networkd[693]: Enumeration completed Dec 13 02:08:44.092222 systemd[1]: Started systemd-networkd.service. Dec 13 02:08:44.092637 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:08:44.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:44.095076 systemd-networkd[693]: eth0: Link UP Dec 13 02:08:44.170534 iscsid[702]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:08:44.170534 iscsid[702]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 02:08:44.170534 iscsid[702]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 02:08:44.170534 iscsid[702]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:08:44.170534 iscsid[702]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:08:44.170534 iscsid[702]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:08:44.170534 iscsid[702]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:08:44.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.095083 systemd-networkd[693]: eth0: Gained carrier Dec 13 02:08:44.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.105541 systemd-networkd[693]: eth0: DHCPv4 address 10.128.0.48/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:08:44.303810 ignition[649]: Ignition 2.14.0 Dec 13 02:08:44.106982 systemd[1]: Reached target network.target. Dec 13 02:08:44.303822 ignition[649]: Stage: fetch-offline Dec 13 02:08:44.116747 systemd[1]: Starting iscsiuio.service... Dec 13 02:08:44.303892 ignition[649]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:44.126476 systemd[1]: Started iscsiuio.service. Dec 13 02:08:44.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.303937 ignition[649]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:44.152858 systemd[1]: Starting iscsid.service... Dec 13 02:08:44.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.323593 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:44.208908 systemd[1]: Started iscsid.service. Dec 13 02:08:44.323791 ignition[649]: parsed url from cmdline: "" Dec 13 02:08:44.242130 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:08:44.323797 ignition[649]: no config URL provided Dec 13 02:08:44.260603 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:08:44.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:44.323805 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:08:44.294819 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:08:44.323815 ignition[649]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:08:44.322562 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:08:44.323824 ignition[649]: failed to fetch config: resource requires networking Dec 13 02:08:44.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.331827 systemd[1]: Reached target remote-fs.target. Dec 13 02:08:44.324091 ignition[649]: Ignition finished successfully Dec 13 02:08:44.351801 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:08:44.420583 ignition[717]: Ignition 2.14.0 Dec 13 02:08:44.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.373065 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:08:44.420592 ignition[717]: Stage: fetch Dec 13 02:08:44.390996 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:08:44.420724 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:44.407882 systemd[1]: Starting ignition-fetch.service... Dec 13 02:08:44.420755 ignition[717]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:44.445283 unknown[717]: fetched base config from "system" Dec 13 02:08:44.428720 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:44.445297 unknown[717]: fetched base config from "system" Dec 13 02:08:44.428913 ignition[717]: parsed url from cmdline: "" Dec 13 02:08:44.445308 unknown[717]: fetched user config from "gcp" Dec 13 02:08:44.428920 ignition[717]: no config URL provided Dec 13 02:08:44.447785 systemd[1]: Finished ignition-fetch.service. Dec 13 02:08:44.428927 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:08:44.461979 systemd[1]: Starting ignition-kargs.service... Dec 13 02:08:44.428937 ignition[717]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:08:44.495035 systemd[1]: Finished ignition-kargs.service. Dec 13 02:08:44.429017 ignition[717]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 02:08:44.516888 systemd[1]: Starting ignition-disks.service... Dec 13 02:08:44.437706 ignition[717]: GET result: OK Dec 13 02:08:44.540951 systemd[1]: Finished ignition-disks.service. Dec 13 02:08:44.437934 ignition[717]: parsing config with SHA512: aa95d7a3e17ec17989ee3ee2f057a7b99c6e6dee675d0c73c5c484545ed38bcaf5b5641a76ad93b83445b8c243e4be676864ca7047efd1f85e4e30c18456ff34 Dec 13 02:08:44.561913 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:08:44.446039 ignition[717]: fetch: fetch complete Dec 13 02:08:44.575676 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:08:44.446046 ignition[717]: fetch: fetch passed Dec 13 02:08:44.591667 systemd[1]: Reached target local-fs.target. Dec 13 02:08:44.446097 ignition[717]: Ignition finished successfully Dec 13 02:08:44.597711 systemd[1]: Reached target sysinit.target. 
Dec 13 02:08:44.476987 ignition[723]: Ignition 2.14.0 Dec 13 02:08:44.611738 systemd[1]: Reached target basic.target. Dec 13 02:08:44.476996 ignition[723]: Stage: kargs Dec 13 02:08:44.627927 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:08:44.477138 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:44.477172 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:44.485757 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:44.487170 ignition[723]: kargs: kargs passed Dec 13 02:08:44.487252 ignition[723]: Ignition finished successfully Dec 13 02:08:44.528708 ignition[729]: Ignition 2.14.0 Dec 13 02:08:44.528717 ignition[729]: Stage: disks Dec 13 02:08:44.528853 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:44.528883 ignition[729]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:44.537402 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:44.538929 ignition[729]: disks: disks passed Dec 13 02:08:44.538978 ignition[729]: Ignition finished successfully Dec 13 02:08:44.670237 systemd-fsck[737]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 02:08:44.854343 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:08:44.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:44.863739 systemd[1]: Mounting sysroot.mount... Dec 13 02:08:44.895544 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:08:44.891625 systemd[1]: Mounted sysroot.mount. Dec 13 02:08:44.902786 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:08:44.922706 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:08:44.940043 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:08:44.940121 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:08:44.940178 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:08:44.960983 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:08:44.988861 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:08:45.011434 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (743) Dec 13 02:08:45.028719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:08:45.028808 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:08:45.028845 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:08:45.040673 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:08:45.062566 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:08:45.056752 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 02:08:45.071726 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:08:45.081709 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:08:45.100563 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:08:45.110518 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:08:45.143569 systemd-networkd[693]: eth0: Gained IPv6LL Dec 13 02:08:45.154168 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:08:45.193700 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 13 02:08:45.193740 kernel: audit: type=1130 audit(1734055725.152:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:45.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:45.155540 systemd[1]: Starting ignition-mount.service... Dec 13 02:08:45.201669 systemd[1]: Starting sysroot-boot.service... Dec 13 02:08:45.215704 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:08:45.215846 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:08:45.239556 ignition[808]: INFO : Ignition 2.14.0 Dec 13 02:08:45.239556 ignition[808]: INFO : Stage: mount Dec 13 02:08:45.239556 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:45.239556 ignition[808]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:45.335549 kernel: audit: type=1130 audit(1734055725.252:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:45.335603 kernel: audit: type=1130 audit(1734055725.289:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:45.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:45.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:45.335839 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:45.335839 ignition[808]: INFO : mount: mount passed Dec 13 02:08:45.335839 ignition[808]: INFO : Ignition finished successfully Dec 13 02:08:45.409703 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (818) Dec 13 02:08:45.409742 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:08:45.409758 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:08:45.409773 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:08:45.409787 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:08:45.247728 systemd[1]: Finished ignition-mount.service. Dec 13 02:08:45.256492 systemd[1]: Finished sysroot-boot.service. 
Dec 13 02:08:45.292201 systemd[1]: Starting ignition-files.service... Dec 13 02:08:45.439641 ignition[837]: INFO : Ignition 2.14.0 Dec 13 02:08:45.439641 ignition[837]: INFO : Stage: files Dec 13 02:08:45.439641 ignition[837]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:45.439641 ignition[837]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:45.346654 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:08:45.494619 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:45.494619 ignition[837]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:08:45.494619 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:08:45.494619 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:08:45.494619 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:08:45.494619 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:08:45.494619 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:08:45.494619 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:08:45.494619 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:08:45.494619 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:08:45.494619 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:08:45.405577 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 02:08:45.458639 unknown[837]: wrote ssh authorized keys file for user: core Dec 13 02:08:45.691018 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 02:08:45.840052 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:08:45.867560 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (837) Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234251705" Dec 13 02:08:45.867611 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234251705": device or resource busy Dec 13 02:08:45.867611 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2234251705", trying btrfs: device or resource busy Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234251705" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234251705" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem2234251705" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem2234251705" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 02:08:45.867611 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4231014840" Dec 13 02:08:46.096546 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4231014840": device or resource busy Dec 13 02:08:46.096546 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4231014840", trying btrfs: device or resource busy Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4231014840" Dec 13 
02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4231014840" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem4231014840" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem4231014840" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:08:46.096546 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1352229504" Dec 13 02:08:46.342568 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1352229504": device or resource busy Dec 13 02:08:46.342568 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1352229504", trying btrfs: device or resource busy Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1352229504" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem1352229504" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem1352229504" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem1352229504" Dec 13 02:08:46.342568 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:08:46.577584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:08:46.577584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:08:46.577584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Dec 13 02:08:46.994094 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:08:46.994094 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288623001" Dec 13 02:08:47.030574 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288623001": device or resource busy Dec 13 02:08:47.030574 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4288623001", trying btrfs: device or resource busy Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288623001" Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288623001" Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem4288623001" Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem4288623001" Dec 13 02:08:47.030574 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1c): [started] processing unit "oem-gce.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1c): [finished] processing unit "oem-gce.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1d): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1d): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1e): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1e): 
[finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1f): [started] processing unit "containerd.service" Dec 13 02:08:47.030574 ignition[837]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:08:47.483725 kernel: audit: type=1130 audit(1734055727.046:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.483773 kernel: audit: type=1130 audit(1734055727.138:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.483793 kernel: audit: type=1130 audit(1734055727.206:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.483808 kernel: audit: type=1131 audit(1734055727.206:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.483822 kernel: audit: type=1130 audit(1734055727.308:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.483836 kernel: audit: type=1131 audit(1734055727.308:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.483854 kernel: audit: type=1130 audit(1734055727.445:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:47.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.015330 systemd[1]: mnt-oem4288623001.mount: Deactivated successfully. Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(1f): [finished] processing unit "containerd.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(21): [started] processing unit "prepare-helm.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(21): [finished] processing unit "prepare-helm.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(25): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(25): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:08:47.515550 ignition[837]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:08:47.515550 ignition[837]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:08:47.515550 ignition[837]: INFO : files: files passed Dec 13 02:08:47.515550 ignition[837]: INFO : Ignition finished successfully Dec 13 02:08:47.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.034105 systemd[1]: Finished ignition-files.service. Dec 13 02:08:47.058365 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:08:47.860723 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:08:47.089783 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:08:47.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:47.090933 systemd[1]: Starting ignition-quench.service... Dec 13 02:08:47.115092 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:08:47.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.140235 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:08:47.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.140408 systemd[1]: Finished ignition-quench.service. Dec 13 02:08:47.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.207844 systemd[1]: Reached target ignition-complete.target. Dec 13 02:08:47.265469 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:08:47.995589 ignition[875]: INFO : Ignition 2.14.0 Dec 13 02:08:47.995589 ignition[875]: INFO : Stage: umount Dec 13 02:08:47.995589 ignition[875]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:08:47.995589 ignition[875]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:08:47.995589 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:08:47.995589 ignition[875]: INFO : umount: umount passed Dec 13 02:08:47.995589 ignition[875]: INFO : Ignition finished successfully Dec 13 02:08:48.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.305803 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:08:48.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.305919 systemd[1]: Finished initrd-parse-etc.service. 
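Several of the files written during the files stage above live on the OEM partition, and the log shows the same pattern each time: mount /dev/disk/by-label/OEM at a throwaway /mnt/oemNNNNNNNNNN directory, first as ext4 (which fails with "device or resource busy"), fall back to btrfs, write the file, then unmount. A sketch of that mount-with-fallback pattern, as an illustration only (Ignition itself is Go, and the temp-dir names here are generated rather than the ones in the log):

    # Sketch of the try-ext4-then-btrfs OEM mount pattern seen in the files stage.
    import contextlib, os, subprocess, tempfile

    DEVICE = "/dev/disk/by-label/OEM"   # device path from the log

    @contextlib.contextmanager
    def mounted_oem():
        mnt = tempfile.mkdtemp(prefix="oem", dir="/mnt")   # e.g. /mnt/oem1234abcd
        for fstype in ("ext4", "btrfs"):                   # same order as the log
            if subprocess.run(["mount", "-t", fstype, DEVICE, mnt]).returncode == 0:
                break
        else:
            os.rmdir(mnt)
            raise RuntimeError(f"could not mount {DEVICE} as ext4 or btrfs")
        try:
            yield mnt                                      # caller reads/writes files
        finally:
            subprocess.run(["umount", mnt], check=True)    # always unmount afterwards
            os.rmdir(mnt)

    # Example use (as root): list what the OEM partition provides.
    # with mounted_oem() as mnt:
    #     print(os.listdir(mnt))
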
Dec 13 02:08:48.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.309966 systemd[1]: Reached target initrd-fs.target. Dec 13 02:08:47.366796 systemd[1]: Reached target initrd.target. Dec 13 02:08:48.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.401810 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:08:47.403308 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:08:47.429080 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:08:47.448289 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:08:47.517623 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:08:47.552894 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:08:47.563923 systemd[1]: Stopped target timers.target. Dec 13 02:08:47.587940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:08:47.588124 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:08:47.610077 systemd[1]: Stopped target initrd.target. Dec 13 02:08:47.644877 systemd[1]: Stopped target basic.target. Dec 13 02:08:48.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.676892 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:08:48.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.690901 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:08:47.711919 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:08:47.750874 systemd[1]: Stopped target remote-fs.target. Dec 13 02:08:47.763906 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:08:48.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.805883 systemd[1]: Stopped target sysinit.target. Dec 13 02:08:48.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.812972 systemd[1]: Stopped target local-fs.target. Dec 13 02:08:48.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.404000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:08:47.834869 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:08:47.850762 systemd[1]: Stopped target swap.target. Dec 13 02:08:47.867690 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
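Before the teardown above began, the files stage also dropped a containerd override (10-use-cgroupfs.conf) into the new root and set presets to "enabled" for oem-gce.service, oem-gce-enable-oslogin.service, coreos-metadata-sshkeys@.service and prepare-helm.service. A rough sketch of producing equivalent files under /sysroot by hand; the preset file name and the drop-in body are assumptions, since the log records only the paths and the preset changes:

    # Sketch: recreate the drop-in and preset state the files stage reports.
    # The drop-in body and the preset file name are placeholders (assumptions);
    # only the paths and unit names come from the log.
    from pathlib import Path

    SYSROOT = Path("/sysroot")

    dropin = SYSROOT / "etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "[Service]\n"
        "# placeholder body: the log names the drop-in but not its contents\n"
    )

    preset = SYSROOT / "etc/systemd/system-preset/20-ignition.preset"  # assumed name
    preset.parent.mkdir(parents=True, exist_ok=True)
    preset.write_text("".join(
        f"enable {unit}\n" for unit in (
            "oem-gce.service",
            "oem-gce-enable-oslogin.service",
            "coreos-metadata-sshkeys@.service",
            "prepare-helm.service",
        )
    ))
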
Dec 13 02:08:48.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.867919 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:08:48.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.890926 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:08:48.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.915705 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:08:47.915928 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:08:47.930889 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:08:48.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.931094 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:08:47.948849 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:08:47.949048 systemd[1]: Stopped ignition-files.service. Dec 13 02:08:48.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.966349 systemd[1]: Stopping ignition-mount.service... Dec 13 02:08:48.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.987708 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:08:48.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:47.987988 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:08:48.005013 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:08:48.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.017537 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:08:48.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.017809 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:08:48.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:48.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:48.033817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:08:48.034019 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:08:48.699000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:08:48.699000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:08:48.700000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:08:48.700000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:08:48.700000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:08:48.056571 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:08:48.730977 systemd-journald[190]: Failed to send stream file descriptor to service manager: Connection refused Dec 13 02:08:48.731079 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). Dec 13 02:08:48.057765 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:08:48.740639 iscsid[702]: iscsid shutting down. Dec 13 02:08:48.057878 systemd[1]: Stopped ignition-mount.service. Dec 13 02:08:48.080276 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:08:48.080412 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:08:48.098356 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:08:48.098528 systemd[1]: Stopped ignition-disks.service. Dec 13 02:08:48.112725 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:08:48.112807 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:08:48.128659 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:08:48.128750 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:08:48.144696 systemd[1]: Stopped target network.target. Dec 13 02:08:48.157601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:08:48.157836 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:08:48.172799 systemd[1]: Stopped target paths.target. Dec 13 02:08:48.186666 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:08:48.190522 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:08:48.198039 systemd[1]: Stopped target slices.target. Dec 13 02:08:48.226618 systemd[1]: Stopped target sockets.target. Dec 13 02:08:48.248809 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:08:48.248854 systemd[1]: Closed iscsid.socket. Dec 13 02:08:48.270920 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:08:48.271024 systemd[1]: Closed iscsiuio.socket. Dec 13 02:08:48.279863 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:08:48.279938 systemd[1]: Stopped ignition-setup.service. Dec 13 02:08:48.305803 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:08:48.305875 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:08:48.320932 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:08:48.324465 systemd-networkd[693]: eth0: DHCPv6 lease lost Dec 13 02:08:48.747000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:08:48.335768 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:08:48.360164 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:08:48.360289 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:08:48.375434 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:08:48.375574 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:08:48.391505 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:08:48.391618 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:08:48.406888 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Dec 13 02:08:48.406944 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:08:48.421713 systemd[1]: Stopping network-cleanup.service... Dec 13 02:08:48.434540 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:08:48.434667 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:08:48.448746 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:08:48.448822 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:08:48.464840 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:08:48.464908 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:08:48.479846 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:08:48.502250 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:08:48.502946 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:08:48.503099 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:08:48.517197 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:08:48.517284 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:08:48.534803 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:08:48.534864 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:08:48.550635 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:08:48.550743 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:08:48.566686 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:08:48.566775 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:08:48.581622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:08:48.581702 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:08:48.597754 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:08:48.614551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:08:48.614790 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:08:48.630442 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:08:48.630575 systemd[1]: Stopped network-cleanup.service. Dec 13 02:08:48.645015 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:08:48.645131 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:08:48.663001 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:08:48.679808 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:08:48.697918 systemd[1]: Switching root. Dec 13 02:08:48.751168 systemd-journald[190]: Journal stopped Dec 13 02:08:53.491694 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:08:53.491806 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 02:08:53.491831 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:08:53.491853 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:08:53.491875 kernel: SELinux: policy capability open_perms=1 Dec 13 02:08:53.491898 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:08:53.491930 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:08:53.491962 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:08:53.491984 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:08:53.492006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:08:53.492027 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:08:53.492051 systemd[1]: Successfully loaded SELinux policy in 108.510ms. Dec 13 02:08:53.492093 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.727ms. Dec 13 02:08:53.492118 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:08:53.492143 systemd[1]: Detected virtualization kvm. Dec 13 02:08:53.492170 systemd[1]: Detected architecture x86-64. Dec 13 02:08:53.492193 systemd[1]: Detected first boot. Dec 13 02:08:53.492216 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:08:53.492239 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:08:53.492269 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:08:53.492294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:08:53.492331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:08:53.492359 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:08:53.492402 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:08:53.492425 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:08:53.492450 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:08:53.492474 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:08:53.492497 systemd[1]: Created slice system-getty.slice. Dec 13 02:08:53.492526 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:08:53.492551 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:08:53.492574 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:08:53.492601 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:08:53.492624 systemd[1]: Created slice user.slice. Dec 13 02:08:53.492649 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:08:53.492672 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:08:53.492696 systemd[1]: Set up automount boot.automount. Dec 13 02:08:53.492719 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:08:53.492742 systemd[1]: Reached target integritysetup.target. 
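The first-boot path above initializes the machine ID from the VM UUID, which on a KVM guest like this one is exposed through DMI. A minimal sketch of reading that UUID and normalizing it into machine-id form (32 lowercase hex characters, no dashes); it mirrors the idea rather than systemd's exact code path, and reading the sysfs node requires root:

    # Sketch: derive a machine-id-style value from the VM's DMI product UUID,
    # echoing the "Initializing machine ID from VM UUID" step above (run as root).
    from pathlib import Path

    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = uuid.replace("-", "").lower()   # machine-id format: 32 hex chars

    assert len(machine_id) == 32, f"unexpected UUID form: {uuid}"
    print(machine_id)
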
Dec 13 02:08:53.492765 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:08:53.492789 systemd[1]: Reached target remote-fs.target. Dec 13 02:08:53.492816 systemd[1]: Reached target slices.target. Dec 13 02:08:53.492839 systemd[1]: Reached target swap.target. Dec 13 02:08:53.492862 systemd[1]: Reached target torcx.target. Dec 13 02:08:53.492885 systemd[1]: Reached target veritysetup.target. Dec 13 02:08:53.492908 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:08:53.492930 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:08:53.492953 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 13 02:08:53.492982 kernel: audit: type=1400 audit(1734055733.002:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:08:53.493009 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:08:53.493034 kernel: audit: type=1335 audit(1734055733.002:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:08:53.493056 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:08:53.493079 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:08:53.493102 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:08:53.493126 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:08:53.493150 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:08:53.493173 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:08:53.493196 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:08:53.493222 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:08:53.493275 systemd[1]: Mounting media.mount... Dec 13 02:08:53.493299 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:53.493323 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:08:53.493346 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:08:53.493369 systemd[1]: Mounting tmp.mount... Dec 13 02:08:53.493417 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:08:53.493441 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:08:53.493464 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:08:53.493492 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:08:53.493516 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:08:53.493541 systemd[1]: Starting modprobe@drm.service... Dec 13 02:08:53.493564 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:08:53.493588 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:08:53.493612 systemd[1]: Starting modprobe@loop.service... Dec 13 02:08:53.493636 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:08:53.493659 kernel: loop: module loaded Dec 13 02:08:53.493682 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 02:08:53.493709 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 02:08:53.493732 kernel: fuse: init (API version 7.34) Dec 13 02:08:53.493754 systemd[1]: Starting systemd-journald.service... Dec 13 02:08:53.493777 systemd[1]: Starting systemd-modules-load.service... 
Dec 13 02:08:53.493801 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:08:53.493825 kernel: audit: type=1305 audit(1734055733.480:89): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:08:53.493854 systemd-journald[1038]: Journal started Dec 13 02:08:53.493945 systemd-journald[1038]: Runtime Journal (/run/log/journal/6eaa6ad812f755edf60f82b4f2a89e0f) is 8.0M, max 148.8M, 140.8M free. Dec 13 02:08:53.002000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:08:53.002000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:08:53.480000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:08:53.535646 kernel: audit: type=1300 audit(1734055733.480:89): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff94826ff0 a2=4000 a3=7fff9482708c items=0 ppid=1 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:08:53.535763 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:08:53.535813 kernel: audit: type=1327 audit(1734055733.480:89): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:08:53.480000 audit[1038]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff94826ff0 a2=4000 a3=7fff9482708c items=0 ppid=1 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:08:53.480000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:08:53.560412 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:08:53.581413 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:53.591437 systemd[1]: Started systemd-journald.service. Dec 13 02:08:53.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.601608 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:08:53.622418 kernel: audit: type=1130 audit(1734055733.598:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.628769 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:08:53.635710 systemd[1]: Mounted media.mount. Dec 13 02:08:53.642632 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:08:53.650638 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:08:53.659779 systemd[1]: Mounted tmp.mount. Dec 13 02:08:53.666960 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:08:53.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:53.676150 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:08:53.698457 kernel: audit: type=1130 audit(1734055733.674:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.707054 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:08:53.707339 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:08:53.729455 kernel: audit: type=1130 audit(1734055733.705:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.738242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:08:53.738553 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:08:53.781937 kernel: audit: type=1130 audit(1734055733.736:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.782063 kernel: audit: type=1131 audit(1734055733.736:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.791056 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:08:53.791299 systemd[1]: Finished modprobe@drm.service. Dec 13 02:08:53.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.800118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:08:53.800370 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 02:08:53.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.809034 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:08:53.809287 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:08:53.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.818138 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:08:53.818472 systemd[1]: Finished modprobe@loop.service. Dec 13 02:08:53.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.828130 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:08:53.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.837031 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:08:53.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.848125 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:08:53.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.857090 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:08:53.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.866246 systemd[1]: Reached target network-pre.target. Dec 13 02:08:53.876119 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:08:53.886294 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:08:53.893572 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:08:53.897024 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:08:53.906719 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 02:08:53.913175 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:08:53.915239 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:08:53.918002 systemd-journald[1038]: Time spent on flushing to /var/log/journal/6eaa6ad812f755edf60f82b4f2a89e0f is 71.216ms for 1089 entries. Dec 13 02:08:53.918002 systemd-journald[1038]: System Journal (/var/log/journal/6eaa6ad812f755edf60f82b4f2a89e0f) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:08:54.031166 systemd-journald[1038]: Received client request to flush runtime journal. Dec 13 02:08:53.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:53.930615 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:08:53.932679 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:08:53.943628 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:08:53.952594 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:08:53.963302 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:08:54.032619 udevadm[1061]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:08:53.971731 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:08:53.981062 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:08:53.997460 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:08:54.007233 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:08:54.026494 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:08:54.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.035576 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:08:54.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.046967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:08:54.110038 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:08:54.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.673816 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:08:54.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.685180 systemd[1]: Starting systemd-udevd.service... Dec 13 02:08:54.710659 systemd-udevd[1072]: Using default interface naming scheme 'v252'. Dec 13 02:08:54.763399 systemd[1]: Started systemd-udevd.service. 
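systemd-journal-flush.service above moves the early-boot runtime journal out of /run/log/journal into the persistent store at /var/log/journal, which is where the 8.0M used / 584.8M max figures come from. A short sketch of the equivalent manual checks (paths are the systemd defaults; the vacuum size is only an illustrative value):

  journalctl --flush               # ask journald to flush the runtime journal to /var/log/journal
  journalctl --disk-usage          # report how much space the persistent journal currently uses
  journalctl --vacuum-size=500M    # optionally trim archived journal files down to a size cap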
Dec 13 02:08:54.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.776169 systemd[1]: Starting systemd-networkd.service... Dec 13 02:08:54.795619 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:08:54.841281 systemd[1]: Found device dev-ttyS0.device. Dec 13 02:08:54.890472 systemd[1]: Started systemd-userdbd.service. Dec 13 02:08:54.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:54.971426 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:08:55.073174 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1079) Dec 13 02:08:55.073287 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:08:55.075785 systemd-networkd[1083]: lo: Link UP Dec 13 02:08:55.075803 systemd-networkd[1083]: lo: Gained carrier Dec 13 02:08:55.076611 systemd-networkd[1083]: Enumeration completed Dec 13 02:08:55.076774 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:08:55.082461 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:08:55.078823 systemd-networkd[1083]: eth0: Link UP Dec 13 02:08:55.078831 systemd-networkd[1083]: eth0: Gained carrier Dec 13 02:08:55.081747 systemd[1]: Started systemd-networkd.service. Dec 13 02:08:55.088447 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:08:55.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:55.097000 audit[1078]: AVC avc: denied { confidentiality } for pid=1078 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:08:55.095583 systemd-networkd[1083]: eth0: DHCPv4 address 10.128.0.48/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:08:55.097000 audit[1078]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a58b5f1530 a1=337fc a2=7eff494cabc5 a3=5 items=110 ppid=1072 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:08:55.097000 audit: CWD cwd="/" Dec 13 02:08:55.097000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=1 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=2 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=3 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=4 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=5 name=(null) inode=14595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=6 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=7 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=8 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=9 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=10 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=11 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=12 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=13 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=14 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=15 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=16 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=17 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=18 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=19 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=20 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=21 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=22 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=23 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=24 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=25 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=26 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=27 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=28 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 
audit: PATH item=29 name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=30 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=31 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=32 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=33 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=34 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=35 name=(null) inode=14610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=36 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=37 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=38 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=39 name=(null) inode=14612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=40 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=41 name=(null) inode=14613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=42 name=(null) inode=14593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=43 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=44 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=45 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=46 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=47 name=(null) inode=14616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=48 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=49 name=(null) inode=14617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=50 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=51 name=(null) inode=14618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=52 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=53 name=(null) inode=14619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=55 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=56 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=57 name=(null) inode=14621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=58 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=59 name=(null) inode=14622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=60 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=61 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=62 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=63 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=64 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=65 name=(null) inode=14625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=66 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=67 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=68 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=69 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=70 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=71 name=(null) inode=14628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=72 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=73 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=74 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=75 name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=76 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=77 name=(null) inode=14631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: 
PATH item=78 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=79 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=80 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=81 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=82 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=83 name=(null) inode=14634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=84 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=85 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=86 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=87 name=(null) inode=14636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=88 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=89 name=(null) inode=14637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=90 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=91 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=92 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=93 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=94 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=95 name=(null) inode=14640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=96 name=(null) inode=14620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=97 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=98 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=99 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=100 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=101 name=(null) inode=14643 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=102 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=103 name=(null) inode=14644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=104 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=105 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=106 name=(null) inode=14641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=107 name=(null) inode=14646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PATH item=109 name=(null) inode=14647 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:08:55.097000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:08:55.127417 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or 
use force_addr=0xaddr Dec 13 02:08:55.160090 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 02:08:55.164439 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:08:55.185448 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:08:55.203518 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:08:55.234328 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:08:55.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.244447 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:08:55.273810 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:08:55.308025 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:08:55.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.317140 systemd[1]: Reached target cryptsetup.target. Dec 13 02:08:55.327157 systemd[1]: Starting lvm2-activation.service... Dec 13 02:08:55.333456 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:08:55.359513 systemd[1]: Finished lvm2-activation.service. Dec 13 02:08:55.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.367935 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:08:55.376600 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:08:55.376661 systemd[1]: Reached target local-fs.target. Dec 13 02:08:55.385578 systemd[1]: Reached target machines.target. Dec 13 02:08:55.395273 systemd[1]: Starting ldconfig.service... Dec 13 02:08:55.403645 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:08:55.403743 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:55.406292 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:08:55.416840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:08:55.428920 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:08:55.440191 systemd[1]: Starting systemd-sysext.service... Dec 13 02:08:55.441080 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1115 (bootctl) Dec 13 02:08:55.444274 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:08:55.471827 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:08:55.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.479880 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
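The lvm2-activation warnings above simply mean lvmetad is not running, so LVM falls back to scanning block devices directly; on this image, which carries no LVM physical volumes, nothing is activated. A rough, illustrative equivalent of what the activation services attempt:

  lvm pvs            # list physical volumes (expected to be empty on a stock Flatcar GCE disk)
  lvm vgchange -ay   # activate any volume groups that a scan does find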
Dec 13 02:08:55.484873 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:08:55.485475 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:08:55.516571 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:08:55.611258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:08:55.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.613361 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:08:55.626925 systemd-fsck[1127]: fsck.fat 4.2 (2021-01-31) Dec 13 02:08:55.626925 systemd-fsck[1127]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:08:55.629985 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:08:55.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.643699 systemd[1]: Mounting boot.mount... Dec 13 02:08:55.665446 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:08:55.670923 systemd[1]: Mounted boot.mount. Dec 13 02:08:55.697453 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:08:55.707703 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:08:55.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.727455 (sd-sysext)[1136]: Using extensions 'kubernetes'. Dec 13 02:08:55.729606 (sd-sysext)[1136]: Merged extensions into '/usr'. Dec 13 02:08:55.758809 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:55.761093 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:08:55.768908 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:08:55.770992 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:08:55.780647 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:08:55.790311 systemd[1]: Starting modprobe@loop.service... Dec 13 02:08:55.797666 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:08:55.797943 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:55.798161 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:55.805340 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:08:55.813240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:08:55.813555 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:08:55.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:08:55.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.822595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:08:55.822850 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:08:55.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.833319 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:08:55.833548 systemd[1]: Finished modprobe@loop.service. Dec 13 02:08:55.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.844599 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:08:55.844746 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:08:55.846513 systemd[1]: Finished systemd-sysext.service. Dec 13 02:08:55.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:55.858734 systemd[1]: Starting ensure-sysext.service... Dec 13 02:08:55.867985 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:08:55.880687 systemd[1]: Reloading. Dec 13 02:08:55.892115 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:08:55.898093 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:08:55.903153 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:08:56.048855 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2024-12-13T02:08:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:08:56.050507 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2024-12-13T02:08:56Z" level=info msg="torcx already run" Dec 13 02:08:56.128903 ldconfig[1114]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:08:56.244186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
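The (sd-sysext) lines above show a system extension image named 'kubernetes' being overlaid onto /usr before systemd reloads. On systemd 252 the merge can be inspected or redone by hand; a brief sketch (the extension name comes from the log, the directories are the standard systemd-sysext search paths):

  systemd-sysext status    # list merged extension images and the hierarchies they cover
  systemd-sysext refresh   # unmerge and re-merge after changing images under /etc/extensions or /var/lib/extensions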
Dec 13 02:08:56.244216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:08:56.269520 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:08:56.278636 systemd-networkd[1083]: eth0: Gained IPv6LL Dec 13 02:08:56.355356 systemd[1]: Finished ldconfig.service. Dec 13 02:08:56.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:56.364271 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:08:56.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:56.379066 systemd[1]: Starting audit-rules.service... Dec 13 02:08:56.388779 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:08:56.400201 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:08:56.411178 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:08:56.423014 systemd[1]: Starting systemd-resolved.service... Dec 13 02:08:56.433888 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:08:56.443197 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:08:56.452745 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:08:56.456000 audit[1247]: SYSTEM_BOOT pid=1247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:08:56.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:56.462457 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:08:56.462862 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:08:56.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:56.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:08:56.480724 systemd[1]: Finished systemd-journal-catalog-update.service. 
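The warnings above flag the cgroup-v1 directives CPUShares= and MemoryLimit= in locksmithd.service; systemd now expects the unified-hierarchy equivalents CPUWeight= and MemoryMax=. A hypothetical drop-in that performs the rename (the numeric values are placeholders, not taken from the real unit, and the empty assignments are intended to clear the vendor-supplied settings):

  sudo mkdir -p /etc/systemd/system/locksmithd.service.d
  printf '[Service]\nCPUShares=\nCPUWeight=100\nMemoryLimit=\nMemoryMax=512M\n' \
    | sudo tee /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
  sudo systemctl daemon-reload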
Dec 13 02:08:56.482153 augenrules[1255]: No rules Dec 13 02:08:56.480000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:08:56.480000 audit[1255]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd009937b0 a2=420 a3=0 items=0 ppid=1223 pid=1255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:08:56.480000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:08:56.494997 systemd[1]: Finished audit-rules.service. Dec 13 02:08:56.506893 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:56.507470 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.509759 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:08:56.518546 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:08:56.528569 systemd[1]: Starting modprobe@loop.service... Dec 13 02:08:56.537947 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:08:56.546593 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.546967 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:56.549728 systemd[1]: Starting systemd-update-done.service... Dec 13 02:08:56.552049 enable-oslogin[1269]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:08:56.556517 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:08:56.556726 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:56.559742 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:08:56.569352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:08:56.569639 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:08:56.579307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:08:56.579718 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:08:56.589330 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:08:56.589631 systemd[1]: Finished modprobe@loop.service. Dec 13 02:08:56.598302 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:08:56.598728 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:08:56.608454 systemd[1]: Finished systemd-update-done.service. Dec 13 02:08:56.619121 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:08:56.619364 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.625147 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:56.625657 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.628078 systemd[1]: Starting modprobe@dm_mod.service... 
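The PROCTITLE record above encodes the auditctl command line as NUL-separated hex. It decodes on any host with xxd (the hex string is copied verbatim from the log; ausearch -i performs the same interpretation when reading audit logs):

  echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
    | xxd -r -p | tr '\0' ' ' && echo
  # prints: /sbin/auditctl -R /etc/audit/audit.rules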
Dec 13 02:08:56.637734 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:08:56.646820 systemd[1]: Starting modprobe@loop.service... Dec 13 02:08:56.655786 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:08:56.661913 enable-oslogin[1281]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:08:56.664590 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.664851 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:56.665039 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:08:56.665202 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:56.667629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:08:56.667905 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:08:56.677304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:08:56.677602 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:08:56.687376 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:08:56.687705 systemd[1]: Finished modprobe@loop.service. Dec 13 02:08:56.697188 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:08:56.697545 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:08:56.706349 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:08:56.706572 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.711837 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:56.711919 systemd-resolved[1239]: Positive Trust Anchors: Dec 13 02:08:56.711938 systemd-resolved[1239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:08:56.712001 systemd-resolved[1239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:08:56.712878 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.715564 systemd-timesyncd[1244]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 02:08:56.715644 systemd-timesyncd[1244]: Initial clock synchronization to Fri 2024-12-13 02:08:56.685136 UTC. Dec 13 02:08:56.715865 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:08:56.725401 systemd[1]: Starting modprobe@drm.service... Dec 13 02:08:56.734576 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:08:56.743709 systemd[1]: Starting modprobe@loop.service... Dec 13 02:08:56.753645 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:08:56.760944 systemd-resolved[1239]: Defaulting to hostname 'linux'. 
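systemd-resolved above loads the root DNSSEC trust anchor plus the standard negative trust anchors, and systemd-timesyncd synchronizes against the GCE metadata server at 169.254.169.254. Both can be checked at runtime; a minimal sketch:

  resolvectl status            # global and per-link DNS configuration, including the DNSSEC setting
  timedatectl timesync-status  # NTP server in use (169.254.169.254 here), poll interval and offset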
Dec 13 02:08:56.761843 enable-oslogin[1293]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:08:56.762661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.762932 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:56.765743 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:08:56.774625 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:08:56.774885 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:08:56.776478 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:08:56.786122 systemd[1]: Started systemd-resolved.service. Dec 13 02:08:56.795209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:08:56.795510 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:08:56.804281 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:08:56.804573 systemd[1]: Finished modprobe@drm.service. Dec 13 02:08:56.813189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:08:56.813470 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:08:56.822174 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:08:56.822503 systemd[1]: Finished modprobe@loop.service. Dec 13 02:08:56.831196 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:08:56.831580 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:08:56.841665 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:08:56.852635 systemd[1]: Reached target network.target. Dec 13 02:08:56.861650 systemd[1]: Reached target network-online.target. Dec 13 02:08:56.870566 systemd[1]: Reached target nss-lookup.target. Dec 13 02:08:56.879569 systemd[1]: Reached target time-set.target. Dec 13 02:08:56.887599 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:08:56.887665 systemd[1]: Reached target sysinit.target. Dec 13 02:08:56.896669 systemd[1]: Started motdgen.path. Dec 13 02:08:56.903613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:08:56.913789 systemd[1]: Started logrotate.timer. Dec 13 02:08:56.920721 systemd[1]: Started mdadm.timer. Dec 13 02:08:56.927576 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:08:56.936600 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:08:56.936669 systemd[1]: Reached target paths.target. Dec 13 02:08:56.943563 systemd[1]: Reached target timers.target. Dec 13 02:08:56.951359 systemd[1]: Listening on dbus.socket. Dec 13 02:08:56.960130 systemd[1]: Starting docker.socket... Dec 13 02:08:56.969871 systemd[1]: Listening on sshd.socket. Dec 13 02:08:56.976892 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:56.977022 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:08:56.978231 systemd[1]: Finished ensure-sysext.service. 
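With systemd-networkd-wait-online finished and network-online.target reached, the DHCPv4 lease acquired earlier (10.128.0.48/32 via 169.254.169.254) and the IPv6 link-local address can be confirmed from userspace; a quick check, assuming the single eth0 link seen in this log:

  networkctl list          # one line per link with its operational and setup state
  networkctl status eth0   # addresses, gateway and DNS handed out by the metadata DHCP server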
Dec 13 02:08:56.986827 systemd[1]: Listening on docker.socket. Dec 13 02:08:56.994679 systemd[1]: Reached target sockets.target. Dec 13 02:08:57.003549 systemd[1]: Reached target basic.target. Dec 13 02:08:57.010804 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:08:57.010887 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:08:57.010924 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:08:57.012616 systemd[1]: Starting containerd.service... Dec 13 02:08:57.021220 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:08:57.031620 systemd[1]: Starting dbus.service... Dec 13 02:08:57.042008 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:08:57.052091 systemd[1]: Starting extend-filesystems.service... Dec 13 02:08:57.066319 jq[1305]: false Dec 13 02:08:57.059546 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:08:57.062122 systemd[1]: Starting kubelet.service... Dec 13 02:08:57.072575 systemd[1]: Starting motdgen.service... Dec 13 02:08:57.082874 systemd[1]: Starting oem-gce.service... Dec 13 02:08:57.098742 systemd[1]: Starting prepare-helm.service... Dec 13 02:08:57.109085 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:08:57.113926 extend-filesystems[1306]: Found loop1 Dec 13 02:08:57.113926 extend-filesystems[1306]: Found sda Dec 13 02:08:57.113926 extend-filesystems[1306]: Found sda1 Dec 13 02:08:57.113926 extend-filesystems[1306]: Found sda2 Dec 13 02:08:57.119003 systemd[1]: Starting sshd-keygen.service... Dec 13 02:08:57.209322 extend-filesystems[1306]: Found sda3 Dec 13 02:08:57.209322 extend-filesystems[1306]: Found usr Dec 13 02:08:57.209322 extend-filesystems[1306]: Found sda4 Dec 13 02:08:57.209322 extend-filesystems[1306]: Found sda6 Dec 13 02:08:57.209322 extend-filesystems[1306]: Found sda7 Dec 13 02:08:57.209322 extend-filesystems[1306]: Found sda9 Dec 13 02:08:57.209322 extend-filesystems[1306]: Checking size of /dev/sda9 Dec 13 02:08:57.295633 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 02:08:57.130244 systemd[1]: Starting systemd-logind.service... Dec 13 02:08:57.296100 extend-filesystems[1306]: Resized partition /dev/sda9 Dec 13 02:08:57.137555 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:08:57.307293 extend-filesystems[1354]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:08:57.137685 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 02:08:57.140536 systemd[1]: Starting update-engine.service... Dec 13 02:08:57.315228 jq[1333]: true Dec 13 02:08:57.150419 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:08:57.162996 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:08:57.163447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:08:57.169889 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:08:57.170349 systemd[1]: Finished ssh-key-proc-cmdline.service. 
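extend-filesystems above enumerates the partitions and then grows the root ext4 filesystem online (the kernel line shows the resize from 1617920 to 2538491 blocks, completed a few entries later). Under the hood this is an ordinary online resize2fs; a rough manual equivalent using the device names from the log:

  lsblk /dev/sda             # confirm sda9 is the root partition and see its new size
  sudo resize2fs /dev/sda9   # grow the mounted ext4 filesystem to fill the partition
  df -h /                    # verify the additional space is visible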
Dec 13 02:08:57.318311 mkfs.ext4[1344]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 02:08:57.318311 mkfs.ext4[1344]: Discarding device blocks: done Dec 13 02:08:57.318311 mkfs.ext4[1344]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 02:08:57.318311 mkfs.ext4[1344]: Filesystem UUID: 0d7c2ead-1494-450d-a916-cf548a909013 Dec 13 02:08:57.318311 mkfs.ext4[1344]: Superblock backups stored on blocks: Dec 13 02:08:57.318311 mkfs.ext4[1344]: 32768, 98304, 163840, 229376 Dec 13 02:08:57.318311 mkfs.ext4[1344]: Allocating group tables: done Dec 13 02:08:57.318311 mkfs.ext4[1344]: Writing inode tables: done Dec 13 02:08:57.318311 mkfs.ext4[1344]: Creating journal (8192 blocks): done Dec 13 02:08:57.318311 mkfs.ext4[1344]: Writing superblocks and filesystem accounting information: done Dec 13 02:08:57.219505 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:08:57.220087 systemd[1]: Finished motdgen.service. Dec 13 02:08:57.349354 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 02:08:57.349523 jq[1347]: true Dec 13 02:08:57.351505 extend-filesystems[1354]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:08:57.351505 extend-filesystems[1354]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 02:08:57.351505 extend-filesystems[1354]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 02:08:57.399602 extend-filesystems[1306]: Resized filesystem in /dev/sda9 Dec 13 02:08:57.352496 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:08:57.352940 systemd[1]: Finished extend-filesystems.service. Dec 13 02:08:57.415052 dbus-daemon[1304]: [system] SELinux support is enabled Dec 13 02:08:57.415360 systemd[1]: Started dbus.service. Dec 13 02:08:57.416571 umount[1374]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 02:08:57.428724 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:08:57.428804 systemd[1]: Reached target system-config.target. Dec 13 02:08:57.438627 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:08:57.438668 systemd[1]: Reached target user-config.target. Dec 13 02:08:57.447816 tar[1340]: linux-amd64/helm Dec 13 02:08:57.449903 update_engine[1332]: I1213 02:08:57.449369 1332 main.cc:92] Flatcar Update Engine starting Dec 13 02:08:57.455045 dbus-daemon[1304]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1083 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:08:57.460959 dbus-daemon[1304]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:08:57.467078 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:08:57.487422 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:08:57.492095 update_engine[1332]: I1213 02:08:57.491148 1332 update_check_scheduler.cc:74] Next update check in 2m6s Dec 13 02:08:57.495006 systemd[1]: Started update-engine.service.
Dec 13 02:08:57.497481 env[1343]: time="2024-12-13T02:08:57.497417966Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:08:57.504023 systemd[1]: Started locksmithd.service. Dec 13 02:08:57.516454 bash[1384]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:08:57.517667 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:08:57.570783 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:08:57.666974 coreos-metadata[1303]: Dec 13 02:08:57.666 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 02:08:57.674476 coreos-metadata[1303]: Dec 13 02:08:57.674 INFO Fetch failed with 404: resource not found Dec 13 02:08:57.674622 coreos-metadata[1303]: Dec 13 02:08:57.674 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 02:08:57.676515 coreos-metadata[1303]: Dec 13 02:08:57.676 INFO Fetch successful Dec 13 02:08:57.676622 coreos-metadata[1303]: Dec 13 02:08:57.676 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 02:08:57.679579 coreos-metadata[1303]: Dec 13 02:08:57.678 INFO Fetch failed with 404: resource not found Dec 13 02:08:57.679579 coreos-metadata[1303]: Dec 13 02:08:57.678 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 02:08:57.679579 coreos-metadata[1303]: Dec 13 02:08:57.679 INFO Fetch failed with 404: resource not found Dec 13 02:08:57.679579 coreos-metadata[1303]: Dec 13 02:08:57.679 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 02:08:57.678300 systemd-logind[1329]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:08:57.678414 systemd-logind[1329]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:08:57.678457 systemd-logind[1329]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:08:57.679453 systemd-logind[1329]: New seat seat0. Dec 13 02:08:57.685260 coreos-metadata[1303]: Dec 13 02:08:57.680 INFO Fetch successful Dec 13 02:08:57.683405 unknown[1303]: wrote ssh authorized keys file for user: core Dec 13 02:08:57.704750 systemd[1]: Started systemd-logind.service. Dec 13 02:08:57.717813 update-ssh-keys[1397]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:08:57.718566 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:08:57.725983 env[1343]: time="2024-12-13T02:08:57.725866320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:08:57.732966 env[1343]: time="2024-12-13T02:08:57.732905140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:08:57.744315 env[1343]: time="2024-12-13T02:08:57.744255136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:08:57.744315 env[1343]: time="2024-12-13T02:08:57.744311931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:08:57.744801 env[1343]: time="2024-12-13T02:08:57.744762232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:08:57.744888 env[1343]: time="2024-12-13T02:08:57.744802798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:08:57.744888 env[1343]: time="2024-12-13T02:08:57.744825864Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:08:57.744888 env[1343]: time="2024-12-13T02:08:57.744842266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:08:57.745022 env[1343]: time="2024-12-13T02:08:57.744972182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:08:57.745329 env[1343]: time="2024-12-13T02:08:57.745297043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:08:57.745649 env[1343]: time="2024-12-13T02:08:57.745614704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:08:57.745727 env[1343]: time="2024-12-13T02:08:57.745650350Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:08:57.745781 env[1343]: time="2024-12-13T02:08:57.745731518Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:08:57.745781 env[1343]: time="2024-12-13T02:08:57.745752513Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:08:57.754942 env[1343]: time="2024-12-13T02:08:57.754896675Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:08:57.755069 env[1343]: time="2024-12-13T02:08:57.754968157Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:08:57.755069 env[1343]: time="2024-12-13T02:08:57.754990360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:08:57.755171 env[1343]: time="2024-12-13T02:08:57.755075287Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755171 env[1343]: time="2024-12-13T02:08:57.755131724Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755171 env[1343]: time="2024-12-13T02:08:57.755157975Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755324 env[1343]: time="2024-12-13T02:08:57.755243473Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755324 env[1343]: time="2024-12-13T02:08:57.755288991Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 02:08:57.755324 env[1343]: time="2024-12-13T02:08:57.755312937Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755492 env[1343]: time="2024-12-13T02:08:57.755335440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755492 env[1343]: time="2024-12-13T02:08:57.755374599Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.755492 env[1343]: time="2024-12-13T02:08:57.755435693Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:08:57.755816 env[1343]: time="2024-12-13T02:08:57.755758724Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:08:57.756065 env[1343]: time="2024-12-13T02:08:57.756035216Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:08:57.756837 env[1343]: time="2024-12-13T02:08:57.756799051Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:08:57.756941 env[1343]: time="2024-12-13T02:08:57.756876871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.756941 env[1343]: time="2024-12-13T02:08:57.756903041Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:08:57.757213 env[1343]: time="2024-12-13T02:08:57.756998053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757296 env[1343]: time="2024-12-13T02:08:57.757230617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757296 env[1343]: time="2024-12-13T02:08:57.757279556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757417 env[1343]: time="2024-12-13T02:08:57.757306005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757417 env[1343]: time="2024-12-13T02:08:57.757345869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757417 env[1343]: time="2024-12-13T02:08:57.757369436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757552 env[1343]: time="2024-12-13T02:08:57.757419139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757552 env[1343]: time="2024-12-13T02:08:57.757441940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757552 env[1343]: time="2024-12-13T02:08:57.757490903Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:08:57.757778 env[1343]: time="2024-12-13T02:08:57.757752224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757848 env[1343]: time="2024-12-13T02:08:57.757786792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 02:08:57.757848 env[1343]: time="2024-12-13T02:08:57.757837890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.757954 env[1343]: time="2024-12-13T02:08:57.757860863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:08:57.757954 env[1343]: time="2024-12-13T02:08:57.757907103Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:08:57.757954 env[1343]: time="2024-12-13T02:08:57.757929860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:08:57.758091 env[1343]: time="2024-12-13T02:08:57.757958654Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:08:57.758091 env[1343]: time="2024-12-13T02:08:57.758038732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:08:57.758543 env[1343]: time="2024-12-13T02:08:57.758439575Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:08:57.761950 env[1343]: time="2024-12-13T02:08:57.758738483Z" level=info msg="Connect containerd service" Dec 13 02:08:57.761950 env[1343]: time="2024-12-13T02:08:57.760252139Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" 
Dec 13 02:08:57.795493 env[1343]: time="2024-12-13T02:08:57.795425627Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:08:57.796042 env[1343]: time="2024-12-13T02:08:57.795974191Z" level=info msg="Start subscribing containerd event" Dec 13 02:08:57.801681 env[1343]: time="2024-12-13T02:08:57.801622441Z" level=info msg="Start recovering state" Dec 13 02:08:57.808436 env[1343]: time="2024-12-13T02:08:57.808366226Z" level=info msg="Start event monitor" Dec 13 02:08:57.808678 env[1343]: time="2024-12-13T02:08:57.808653962Z" level=info msg="Start snapshots syncer" Dec 13 02:08:57.808807 env[1343]: time="2024-12-13T02:08:57.808786969Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:08:57.808919 env[1343]: time="2024-12-13T02:08:57.808901525Z" level=info msg="Start streaming server" Dec 13 02:08:57.809191 env[1343]: time="2024-12-13T02:08:57.801570871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:08:57.809432 env[1343]: time="2024-12-13T02:08:57.809375094Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:08:57.809767 systemd[1]: Started containerd.service. Dec 13 02:08:57.810128 env[1343]: time="2024-12-13T02:08:57.810101531Z" level=info msg="containerd successfully booted in 0.319716s" Dec 13 02:08:58.017442 dbus-daemon[1304]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:08:58.017659 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:08:58.018779 dbus-daemon[1304]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1385 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:08:58.031780 systemd[1]: Starting polkit.service... Dec 13 02:08:58.091003 polkitd[1405]: Started polkitd version 121 Dec 13 02:08:58.115945 polkitd[1405]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:08:58.116243 polkitd[1405]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:08:58.118978 polkitd[1405]: Finished loading, compiling and executing 2 rules Dec 13 02:08:58.119793 dbus-daemon[1304]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:08:58.120030 systemd[1]: Started polkit.service. Dec 13 02:08:58.120657 polkitd[1405]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:08:58.139968 systemd-hostnamed[1385]: Hostname set to (transient) Dec 13 02:08:58.143218 systemd-resolved[1239]: System hostname changed to 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal'. Dec 13 02:08:59.074087 tar[1340]: linux-amd64/LICENSE Dec 13 02:08:59.077962 tar[1340]: linux-amd64/README.md Dec 13 02:08:59.096109 systemd[1]: Finished prepare-helm.service. Dec 13 02:08:59.245858 sshd_keygen[1342]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:08:59.347204 systemd[1]: Finished sshd-keygen.service. Dec 13 02:08:59.357327 systemd[1]: Starting issuegen.service... Dec 13 02:08:59.370612 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:08:59.370991 systemd[1]: Finished issuegen.service. Dec 13 02:08:59.382605 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:08:59.397755 systemd[1]: Finished systemd-user-sessions.service. 
Dec 13 02:08:59.409094 systemd[1]: Started getty@tty1.service. Dec 13 02:08:59.419041 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:08:59.428012 systemd[1]: Reached target getty.target. Dec 13 02:08:59.494975 systemd[1]: Started kubelet.service. Dec 13 02:08:59.521225 locksmithd[1387]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:09:00.743623 kubelet[1440]: E1213 02:09:00.743524 1440 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:00.746893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:00.747155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:02.964945 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 02:09:05.116424 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:09:05.137704 systemd-nspawn[1449]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 02:09:05.137704 systemd-nspawn[1449]: Press ^] three times within 1s to kill container. Dec 13 02:09:05.153426 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:09:05.231622 systemd[1]: Started oem-gce.service. Dec 13 02:09:05.232073 systemd[1]: Reached target multi-user.target. Dec 13 02:09:05.234720 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:09:05.246527 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:09:05.246912 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:09:05.247189 systemd[1]: Startup finished in 9.281s (kernel) + 16.210s (userspace) = 25.491s. Dec 13 02:09:05.288893 systemd-nspawn[1449]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 02:09:05.289285 systemd-nspawn[1449]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 02:09:05.289285 systemd-nspawn[1449]: + /usr/bin/google_instance_setup Dec 13 02:09:05.886648 systemd[1]: Created slice system-sshd.slice. Dec 13 02:09:05.888927 systemd[1]: Started sshd@0-10.128.0.48:22-139.178.68.195:54930.service. Dec 13 02:09:05.889252 instance-setup[1457]: INFO Running google_set_multiqueue. Dec 13 02:09:05.918666 instance-setup[1457]: INFO Set channels for eth0 to 2. Dec 13 02:09:05.924825 instance-setup[1457]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 02:09:05.925300 instance-setup[1457]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 02:09:05.925657 instance-setup[1457]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 02:09:05.926575 instance-setup[1457]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 02:09:05.926894 instance-setup[1457]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 02:09:05.927448 instance-setup[1457]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 02:09:05.927742 instance-setup[1457]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Dec 13 02:09:05.928778 instance-setup[1457]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 02:09:05.941299 instance-setup[1457]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 02:09:05.941694 instance-setup[1457]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 02:09:05.989677 systemd-nspawn[1449]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 02:09:06.225400 sshd[1466]: Accepted publickey for core from 139.178.68.195 port 54930 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:06.229610 sshd[1466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:06.252020 systemd[1]: Created slice user-500.slice. Dec 13 02:09:06.254888 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:09:06.272448 systemd-logind[1329]: New session 1 of user core. Dec 13 02:09:06.284282 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:09:06.288160 systemd[1]: Starting user@500.service... Dec 13 02:09:06.311707 (systemd)[1496]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:06.403166 startup-script[1490]: INFO Starting startup scripts. Dec 13 02:09:06.420453 startup-script[1490]: INFO No startup scripts found in metadata. Dec 13 02:09:06.420623 startup-script[1490]: INFO Finished running startup scripts. Dec 13 02:09:06.467396 systemd-nspawn[1449]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 02:09:06.467396 systemd-nspawn[1449]: + daemon_pids=() Dec 13 02:09:06.467849 systemd-nspawn[1449]: + for d in accounts clock_skew network Dec 13 02:09:06.468217 systemd-nspawn[1449]: + daemon_pids+=($!) Dec 13 02:09:06.468419 systemd-nspawn[1449]: + for d in accounts clock_skew network Dec 13 02:09:06.468498 systemd-nspawn[1449]: + /usr/bin/google_accounts_daemon Dec 13 02:09:06.468808 systemd-nspawn[1449]: + daemon_pids+=($!) Dec 13 02:09:06.468973 systemd-nspawn[1449]: + for d in accounts clock_skew network Dec 13 02:09:06.469050 systemd-nspawn[1449]: + /usr/bin/google_clock_skew_daemon Dec 13 02:09:06.469421 systemd-nspawn[1449]: + daemon_pids+=($!) Dec 13 02:09:06.469629 systemd-nspawn[1449]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 02:09:06.469629 systemd-nspawn[1449]: + /usr/bin/systemd-notify --ready Dec 13 02:09:06.470881 systemd-nspawn[1449]: + /usr/bin/google_network_daemon Dec 13 02:09:06.474846 systemd[1496]: Queued start job for default target default.target. Dec 13 02:09:06.475259 systemd[1496]: Reached target paths.target. Dec 13 02:09:06.475291 systemd[1496]: Reached target sockets.target. Dec 13 02:09:06.475313 systemd[1496]: Reached target timers.target. Dec 13 02:09:06.475334 systemd[1496]: Reached target basic.target. Dec 13 02:09:06.475434 systemd[1496]: Reached target default.target. Dec 13 02:09:06.475494 systemd[1496]: Startup finished in 152ms. Dec 13 02:09:06.475587 systemd[1]: Started user@500.service. Dec 13 02:09:06.477285 systemd[1]: Started session-1.scope. Dec 13 02:09:06.531649 systemd-nspawn[1449]: + wait -n 36 37 38 Dec 13 02:09:06.701206 systemd[1]: Started sshd@1-10.128.0.48:22-139.178.68.195:48286.service. Dec 13 02:09:07.020875 sshd[1509]: Accepted publickey for core from 139.178.68.195 port 48286 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:07.021904 sshd[1509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:07.030279 systemd[1]: Started session-2.scope. 
Dec 13 02:09:07.032163 systemd-logind[1329]: New session 2 of user core. Dec 13 02:09:07.242623 sshd[1509]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:07.247677 systemd[1]: sshd@1-10.128.0.48:22-139.178.68.195:48286.service: Deactivated successfully. Dec 13 02:09:07.248937 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:09:07.251496 systemd-logind[1329]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:09:07.253086 systemd-logind[1329]: Removed session 2. Dec 13 02:09:07.267825 google-networking[1504]: INFO Starting Google Networking daemon. Dec 13 02:09:07.275404 google-clock-skew[1503]: INFO Starting Google Clock Skew daemon. Dec 13 02:09:07.284737 systemd[1]: Started sshd@2-10.128.0.48:22-139.178.68.195:48298.service. Dec 13 02:09:07.297674 google-clock-skew[1503]: INFO Clock drift token has changed: 0. Dec 13 02:09:07.311651 systemd-nspawn[1449]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 02:09:07.311928 systemd-nspawn[1449]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 02:09:07.312910 google-clock-skew[1503]: WARNING Failed to sync system time with hardware clock. Dec 13 02:09:07.398272 groupadd[1526]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 02:09:07.402482 groupadd[1526]: group added to /etc/gshadow: name=google-sudoers Dec 13 02:09:07.406838 groupadd[1526]: new group: name=google-sudoers, GID=1000 Dec 13 02:09:07.418898 google-accounts[1502]: INFO Starting Google Accounts daemon. Dec 13 02:09:07.444655 google-accounts[1502]: WARNING OS Login not installed. Dec 13 02:09:07.445696 google-accounts[1502]: INFO Creating a new user account for 0. Dec 13 02:09:07.454514 systemd-nspawn[1449]: useradd: invalid user name '0': use --badname to ignore Dec 13 02:09:07.455248 google-accounts[1502]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 02:09:07.585577 sshd[1522]: Accepted publickey for core from 139.178.68.195 port 48298 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:07.587940 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:07.594514 systemd-logind[1329]: New session 3 of user core. Dec 13 02:09:07.595051 systemd[1]: Started session-3.scope. Dec 13 02:09:07.794068 sshd[1522]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:07.798770 systemd[1]: sshd@2-10.128.0.48:22-139.178.68.195:48298.service: Deactivated successfully. Dec 13 02:09:07.799967 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:09:07.800693 systemd-logind[1329]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:09:07.801957 systemd-logind[1329]: Removed session 3. Dec 13 02:09:07.840760 systemd[1]: Started sshd@3-10.128.0.48:22-139.178.68.195:48302.service. Dec 13 02:09:08.137854 sshd[1541]: Accepted publickey for core from 139.178.68.195 port 48302 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:08.139857 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:08.146416 systemd[1]: Started session-4.scope. Dec 13 02:09:08.146736 systemd-logind[1329]: New session 4 of user core. Dec 13 02:09:08.357642 sshd[1541]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:08.361809 systemd[1]: sshd@3-10.128.0.48:22-139.178.68.195:48302.service: Deactivated successfully. 
Dec 13 02:09:08.363045 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:09:08.365190 systemd-logind[1329]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:09:08.367010 systemd-logind[1329]: Removed session 4. Dec 13 02:09:08.401496 systemd[1]: Started sshd@4-10.128.0.48:22-139.178.68.195:48314.service. Dec 13 02:09:08.695080 sshd[1548]: Accepted publickey for core from 139.178.68.195 port 48314 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:08.696887 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:08.703590 systemd[1]: Started session-5.scope. Dec 13 02:09:08.704097 systemd-logind[1329]: New session 5 of user core. Dec 13 02:09:08.896538 sudo[1552]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:09:08.896991 sudo[1552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:09:08.906666 dbus-daemon[1304]: \xd0\xdd\xd8\xd7VU: received setenforce notice (enforcing=-852550208) Dec 13 02:09:08.908964 sudo[1552]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:08.954212 sshd[1548]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:08.959324 systemd[1]: sshd@4-10.128.0.48:22-139.178.68.195:48314.service: Deactivated successfully. Dec 13 02:09:08.960625 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:09:08.962277 systemd-logind[1329]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:09:08.964138 systemd-logind[1329]: Removed session 5. Dec 13 02:09:08.998708 systemd[1]: Started sshd@5-10.128.0.48:22-139.178.68.195:48318.service. Dec 13 02:09:09.291120 sshd[1556]: Accepted publickey for core from 139.178.68.195 port 48318 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:09.293346 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:09.299975 systemd[1]: Started session-6.scope. Dec 13 02:09:09.300299 systemd-logind[1329]: New session 6 of user core. Dec 13 02:09:09.417907 systemd[1]: Started sshd@6-10.128.0.48:22-218.92.0.190:29349.service. Dec 13 02:09:09.470554 sudo[1563]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:09:09.470969 sudo[1563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:09:09.475500 sudo[1563]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:09.488115 sudo[1562]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:09:09.488593 sudo[1562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:09:09.502953 systemd[1]: Stopping audit-rules.service... Dec 13 02:09:09.510593 kernel: kauditd_printk_skb: 160 callbacks suppressed Dec 13 02:09:09.510704 kernel: audit: type=1305 audit(1734055749.504:140): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 02:09:09.504000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 02:09:09.505850 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:09:09.510930 auditctl[1566]: No rules Dec 13 02:09:09.506196 systemd[1]: Stopped audit-rules.service. Dec 13 02:09:09.511879 systemd[1]: Starting audit-rules.service... 
Dec 13 02:09:09.504000 audit[1566]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd6e86bd0 a2=420 a3=0 items=0 ppid=1 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:09.557418 kernel: audit: type=1300 audit(1734055749.504:140): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd6e86bd0 a2=420 a3=0 items=0 ppid=1 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:09.557563 kernel: audit: type=1327 audit(1734055749.504:140): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 02:09:09.504000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 02:09:09.560955 augenrules[1584]: No rules Dec 13 02:09:09.562399 systemd[1]: Finished audit-rules.service. Dec 13 02:09:09.565943 sudo[1562]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:09.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.611985 kernel: audit: type=1131 audit(1734055749.505:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.612129 kernel: audit: type=1130 audit(1734055749.562:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.615289 kernel: audit: type=1106 audit(1734055749.565:143): pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.565000 audit[1562]: USER_END pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.615071 sshd[1556]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:09.628275 systemd[1]: sshd@5-10.128.0.48:22-139.178.68.195:48318.service: Deactivated successfully. Dec 13 02:09:09.629450 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:09:09.631428 systemd-logind[1329]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:09:09.633178 systemd-logind[1329]: Removed session 6. Dec 13 02:09:09.565000 audit[1562]: CRED_DISP pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 02:09:09.638426 kernel: audit: type=1104 audit(1734055749.565:144): pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.653638 systemd[1]: Started sshd@7-10.128.0.48:22-139.178.68.195:48322.service. Dec 13 02:09:09.616000 audit[1556]: USER_END pid=1556 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.696459 kernel: audit: type=1106 audit(1734055749.616:145): pid=1556 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.696600 kernel: audit: type=1104 audit(1734055749.616:146): pid=1556 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.616000 audit[1556]: CRED_DISP pid=1556 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.48:22-139.178.68.195:48318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.745521 kernel: audit: type=1131 audit(1734055749.628:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.48:22-139.178.68.195:48318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:09.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.48:22-139.178.68.195:48322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:09:09.944000 audit[1591]: USER_ACCT pid=1591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.947597 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 48322 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:09:09.946000 audit[1591]: CRED_ACQ pid=1591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.946000 audit[1591]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff20249290 a2=3 a3=0 items=0 ppid=1 pid=1591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:09.946000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:09:09.948708 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:09.957260 systemd[1]: Started session-7.scope. Dec 13 02:09:09.957632 systemd-logind[1329]: New session 7 of user core. Dec 13 02:09:09.967000 audit[1591]: USER_START pid=1591 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:09.969000 audit[1594]: CRED_ACQ pid=1594 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:10.120000 audit[1595]: USER_ACCT pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:10.121683 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:09:10.120000 audit[1595]: CRED_REFR pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:10.122251 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:09:10.123000 audit[1595]: USER_START pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:10.156250 systemd[1]: Starting docker.service... 
Dec 13 02:09:10.201759 env[1605]: time="2024-12-13T02:09:10.200784057Z" level=info msg="Starting up" Dec 13 02:09:10.203306 env[1605]: time="2024-12-13T02:09:10.203267675Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:09:10.203306 env[1605]: time="2024-12-13T02:09:10.203302485Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:09:10.203505 env[1605]: time="2024-12-13T02:09:10.203336329Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:09:10.203505 env[1605]: time="2024-12-13T02:09:10.203354104Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:09:10.207560 env[1605]: time="2024-12-13T02:09:10.207526078Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:09:10.207560 env[1605]: time="2024-12-13T02:09:10.207561473Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:09:10.207746 env[1605]: time="2024-12-13T02:09:10.207586897Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:09:10.207746 env[1605]: time="2024-12-13T02:09:10.207605944Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:09:10.219554 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1182545466-merged.mount: Deactivated successfully. Dec 13 02:09:10.723214 env[1605]: time="2024-12-13T02:09:10.723135998Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 02:09:10.723214 env[1605]: time="2024-12-13T02:09:10.723182893Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 02:09:10.723624 env[1605]: time="2024-12-13T02:09:10.723548108Z" level=info msg="Loading containers: start." 
Dec 13 02:09:10.827000 audit[1635]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.827000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcefef6980 a2=0 a3=7ffcefef696c items=0 ppid=1605 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 02:09:10.831000 audit[1637]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.831000 audit[1637]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd3bb94530 a2=0 a3=7ffd3bb9451c items=0 ppid=1605 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.831000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 02:09:10.834000 audit[1639]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.834000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffb9083860 a2=0 a3=7fffb908384c items=0 ppid=1605 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.834000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 02:09:10.837000 audit[1641]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.837000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe11a23dc0 a2=0 a3=7ffe11a23dac items=0 ppid=1605 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.837000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 02:09:10.842000 audit[1644]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.842000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff2cd97ef0 a2=0 a3=7fff2cd97edc items=0 ppid=1605 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 02:09:10.863000 audit[1649]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1649 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 13 02:09:10.863000 audit[1649]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd96999c10 a2=0 a3=7ffd96999bfc items=0 ppid=1605 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.863000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 02:09:10.874000 audit[1651]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.874000 audit[1651]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3fb6d0b0 a2=0 a3=7ffd3fb6d09c items=0 ppid=1605 pid=1651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.874000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 02:09:10.878000 audit[1653]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.878000 audit[1653]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff43e36340 a2=0 a3=7fff43e3632c items=0 ppid=1605 pid=1653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.878000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 02:09:10.881000 audit[1655]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1655 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.881000 audit[1655]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffdebcacd10 a2=0 a3=7ffdebcaccfc items=0 ppid=1605 pid=1655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.881000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:09:10.896000 audit[1659]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.896000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffae1b4650 a2=0 a3=7fffae1b463c items=0 ppid=1605 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.896000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:09:10.902000 audit[1660]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:10.902000 audit[1660]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdda567300 a2=0 a3=7ffdda5672ec items=0 ppid=1605 
pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:10.902000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:09:10.918592 kernel: Initializing XFRM netlink socket Dec 13 02:09:10.966711 env[1605]: time="2024-12-13T02:09:10.966651323Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:09:10.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:10.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:10.984570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:09:10.984845 systemd[1]: Stopped kubelet.service. Dec 13 02:09:10.988612 systemd[1]: Starting kubelet.service... Dec 13 02:09:11.023000 audit[1671]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.023000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff3bf335c0 a2=0 a3=7fff3bf335ac items=0 ppid=1605 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.023000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 02:09:11.048000 audit[1674]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.048000 audit[1674]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff7eeeb6e0 a2=0 a3=7fff7eeeb6cc items=0 ppid=1605 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.048000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 02:09:11.054000 audit[1677]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.054000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff5f1f9780 a2=0 a3=7fff5f1f976c items=0 ppid=1605 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.054000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 02:09:11.058000 audit[1679]: NETFILTER_CFG table=filter:16 family=2 
entries=1 op=nft_register_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.058000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe3100fa40 a2=0 a3=7ffe3100fa2c items=0 ppid=1605 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.058000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 02:09:11.063000 audit[1681]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.063000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcc54020d0 a2=0 a3=7ffcc54020bc items=0 ppid=1605 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.063000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 02:09:11.068000 audit[1683]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.068000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd94866b40 a2=0 a3=7ffd94866b2c items=0 ppid=1605 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.068000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 02:09:11.073000 audit[1685]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.073000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd7cd87080 a2=0 a3=7ffd7cd8706c items=0 ppid=1605 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.073000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 02:09:11.088000 audit[1688]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.088000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffdff263100 a2=0 a3=7ffdff2630ec items=0 ppid=1605 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.088000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 02:09:11.093000 audit[1690]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.093000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffca12e10b0 a2=0 a3=7ffca12e109c items=0 ppid=1605 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.093000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 02:09:11.096000 audit[1692]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.096000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffffed172f0 a2=0 a3=7ffffed172dc items=0 ppid=1605 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.096000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 02:09:11.100000 audit[1694]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.100000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff37633a10 a2=0 a3=7fff376339fc items=0 ppid=1605 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.100000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 02:09:11.102695 systemd-networkd[1083]: docker0: Link UP Dec 13 02:09:11.235222 systemd[1]: Started kubelet.service. Dec 13 02:09:11.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:09:11.253000 audit[1709]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1709 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.253000 audit[1709]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc039c5220 a2=0 a3=7ffc039c520c items=0 ppid=1605 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.253000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:09:11.256000 audit[1710]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:11.256000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffffe1953a0 a2=0 a3=7ffffe19538c items=0 ppid=1605 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:11.256000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 02:09:11.259471 env[1605]: time="2024-12-13T02:09:11.259431302Z" level=info msg="Loading containers: done." Dec 13 02:09:11.279213 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1823615841-merged.mount: Deactivated successfully. Dec 13 02:09:11.290377 env[1605]: time="2024-12-13T02:09:11.290304984Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:09:11.290674 env[1605]: time="2024-12-13T02:09:11.290636360Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:09:11.290869 env[1605]: time="2024-12-13T02:09:11.290812399Z" level=info msg="Daemon has completed initialization" Dec 13 02:09:11.315630 systemd[1]: Started docker.service. Dec 13 02:09:11.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:11.333642 env[1605]: time="2024-12-13T02:09:11.333553932Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:09:11.360040 kubelet[1700]: E1213 02:09:11.359808 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:11.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:09:11.365880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:11.366177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:11.366000 audit[1560]: USER_AUTH pid=1560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:09:11.368265 sshd[1560]: Failed password for root from 218.92.0.190 port 29349 ssh2 Dec 13 02:09:11.577000 audit[1560]: USER_AUTH pid=1560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:09:11.579728 sshd[1560]: Failed password for root from 218.92.0.190 port 29349 ssh2 Dec 13 02:09:11.789000 audit[1560]: USER_AUTH pid=1560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:09:11.790588 sshd[1560]: Failed password for root from 218.92.0.190 port 29349 ssh2 Dec 13 02:09:11.999325 sshd[1560]: Received disconnect from 218.92.0.190 port 29349:11: [preauth] Dec 13 02:09:11.999616 sshd[1560]: Disconnected from authenticating user root 218.92.0.190 port 29349 [preauth] Dec 13 02:09:12.001882 systemd[1]: sshd@6-10.128.0.48:22-218.92.0.190:29349.service: Deactivated successfully. Dec 13 02:09:12.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.48:22-218.92.0.190:29349 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:12.611749 env[1343]: time="2024-12-13T02:09:12.611691637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:09:13.069189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432975597.mount: Deactivated successfully. Dec 13 02:09:15.128455 env[1343]: time="2024-12-13T02:09:15.128376167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:15.131810 env[1343]: time="2024-12-13T02:09:15.131759618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:15.135473 env[1343]: time="2024-12-13T02:09:15.135424452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:15.140161 env[1343]: time="2024-12-13T02:09:15.140098759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:15.140744 env[1343]: time="2024-12-13T02:09:15.140685983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:09:15.157899 env[1343]: time="2024-12-13T02:09:15.157846717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:09:17.092588 env[1343]: time="2024-12-13T02:09:17.092496921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:17.095779 env[1343]: 
time="2024-12-13T02:09:17.095721571Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:17.099118 env[1343]: time="2024-12-13T02:09:17.099077919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:17.101936 env[1343]: time="2024-12-13T02:09:17.101877936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:17.103168 env[1343]: time="2024-12-13T02:09:17.103124297Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:09:17.119206 env[1343]: time="2024-12-13T02:09:17.119148762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:09:18.382787 env[1343]: time="2024-12-13T02:09:18.382709011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:18.385813 env[1343]: time="2024-12-13T02:09:18.385763011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:18.388700 env[1343]: time="2024-12-13T02:09:18.388654613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:18.391494 env[1343]: time="2024-12-13T02:09:18.391454962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:18.392454 env[1343]: time="2024-12-13T02:09:18.392404066Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:09:18.407694 env[1343]: time="2024-12-13T02:09:18.407646344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:09:19.514015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354499271.mount: Deactivated successfully. 
Dec 13 02:09:20.215023 env[1343]: time="2024-12-13T02:09:20.214948088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:20.217957 env[1343]: time="2024-12-13T02:09:20.217912029Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:20.220125 env[1343]: time="2024-12-13T02:09:20.220085839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:20.221958 env[1343]: time="2024-12-13T02:09:20.221920268Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:20.222648 env[1343]: time="2024-12-13T02:09:20.222602919Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:09:20.237248 env[1343]: time="2024-12-13T02:09:20.237197545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:09:20.721768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511812891.mount: Deactivated successfully. Dec 13 02:09:21.577843 kernel: kauditd_printk_skb: 92 callbacks suppressed Dec 13 02:09:21.577997 kernel: audit: type=1130 audit(1734055761.545:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:21.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:21.546894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:09:21.547206 systemd[1]: Stopped kubelet.service. Dec 13 02:09:21.576998 systemd[1]: Starting kubelet.service... Dec 13 02:09:21.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:21.602422 kernel: audit: type=1131 audit(1734055761.545:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:21.864064 systemd[1]: Started kubelet.service. Dec 13 02:09:21.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:21.887420 kernel: audit: type=1130 audit(1734055761.863:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:09:21.985409 kubelet[1780]: E1213 02:09:21.985322 1780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:21.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:09:21.988805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:21.989082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:22.011629 kernel: audit: type=1131 audit(1734055761.988:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 02:09:22.091976 env[1343]: time="2024-12-13T02:09:22.091897982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.095409 env[1343]: time="2024-12-13T02:09:22.095335828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.098064 env[1343]: time="2024-12-13T02:09:22.098022231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.100577 env[1343]: time="2024-12-13T02:09:22.100533097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.101725 env[1343]: time="2024-12-13T02:09:22.101647809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:09:22.117186 env[1343]: time="2024-12-13T02:09:22.117031106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:09:22.527763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896514933.mount: Deactivated successfully. 
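The repeating kubelet failure above (`/var/lib/kubelet/config.yaml: no such file or directory`, restart counters 1 and 2) is the usual state of a kubeadm-provisioned node before `kubeadm init`/`kubeadm join` has written that config file; systemd keeps restarting the unit until it appears. A minimal Python sketch of the same pre-flight condition, assuming only the path from the error message (a hypothetical check, not something the node runs):

```python
from pathlib import Path

# Path taken from the kubelet error above; on kubeadm-based setups this file
# is written by `kubeadm init` / `kubeadm join`.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_ready(path: Path = KUBELET_CONFIG) -> bool:
    """True once the kubelet config file exists and is non-empty."""
    try:
        return path.stat().st_size > 0
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    print("kubelet config present:", kubelet_config_ready())
```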
Dec 13 02:09:22.537102 env[1343]: time="2024-12-13T02:09:22.537038783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.539722 env[1343]: time="2024-12-13T02:09:22.539675043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.542102 env[1343]: time="2024-12-13T02:09:22.542057609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.544406 env[1343]: time="2024-12-13T02:09:22.544346721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:22.545199 env[1343]: time="2024-12-13T02:09:22.545147057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:09:22.560265 env[1343]: time="2024-12-13T02:09:22.560200194Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:09:23.019447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829937573.mount: Deactivated successfully. Dec 13 02:09:26.693135 env[1343]: time="2024-12-13T02:09:26.693059325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:26.696258 env[1343]: time="2024-12-13T02:09:26.696212462Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:26.698995 env[1343]: time="2024-12-13T02:09:26.698951292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:26.701476 env[1343]: time="2024-12-13T02:09:26.701434666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:26.702569 env[1343]: time="2024-12-13T02:09:26.702527213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:09:28.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:28.172719 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:09:28.196428 kernel: audit: type=1131 audit(1734055768.171:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:09:31.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:31.701499 systemd[1]: Stopped kubelet.service. Dec 13 02:09:31.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:31.725004 systemd[1]: Starting kubelet.service... Dec 13 02:09:31.744864 kernel: audit: type=1130 audit(1734055771.700:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:31.745027 kernel: audit: type=1131 audit(1734055771.702:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:31.768430 systemd[1]: Reloading. Dec 13 02:09:31.906585 /usr/lib/systemd/system-generators/torcx-generator[1889]: time="2024-12-13T02:09:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:09:31.906633 /usr/lib/systemd/system-generators/torcx-generator[1889]: time="2024-12-13T02:09:31Z" level=info msg="torcx already run" Dec 13 02:09:32.049237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:09:32.049266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:09:32.073399 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:09:32.200911 systemd[1]: Started kubelet.service. Dec 13 02:09:32.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:32.205765 systemd[1]: Stopping kubelet.service... Dec 13 02:09:32.225411 kernel: audit: type=1130 audit(1734055772.200:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:32.226769 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:09:32.227160 systemd[1]: Stopped kubelet.service. Dec 13 02:09:32.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:32.235238 systemd[1]: Starting kubelet.service... Dec 13 02:09:32.249688 kernel: audit: type=1131 audit(1734055772.225:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:09:32.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:32.447420 systemd[1]: Started kubelet.service. Dec 13 02:09:32.471372 kernel: audit: type=1130 audit(1734055772.446:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:32.537452 kubelet[1955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:09:32.537915 kubelet[1955]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:09:32.537983 kubelet[1955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:09:32.540024 kubelet[1955]: I1213 02:09:32.539968 1955 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:09:33.248598 kubelet[1955]: I1213 02:09:33.248547 1955 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:09:33.248598 kubelet[1955]: I1213 02:09:33.248583 1955 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:09:33.248950 kubelet[1955]: I1213 02:09:33.248911 1955 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:09:33.304038 kubelet[1955]: E1213 02:09:33.303978 1955 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.305870 kubelet[1955]: I1213 02:09:33.305835 1955 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:09:33.322458 kubelet[1955]: I1213 02:09:33.322417 1955 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:09:33.323211 kubelet[1955]: I1213 02:09:33.323156 1955 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:09:33.323505 kubelet[1955]: I1213 02:09:33.323467 1955 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:09:33.324492 kubelet[1955]: I1213 02:09:33.324457 1955 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:09:33.324492 kubelet[1955]: I1213 02:09:33.324493 1955 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:09:33.327422 kubelet[1955]: I1213 02:09:33.327365 1955 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:09:33.327591 kubelet[1955]: I1213 02:09:33.327555 1955 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:09:33.327591 kubelet[1955]: I1213 02:09:33.327587 1955 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:09:33.327727 kubelet[1955]: I1213 02:09:33.327627 1955 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:09:33.327727 kubelet[1955]: I1213 02:09:33.327653 1955 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:09:33.330327 kubelet[1955]: W1213 02:09:33.330265 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.330459 kubelet[1955]: E1213 02:09:33.330353 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.331725 kubelet[1955]: I1213 02:09:33.331700 1955 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" 
Dec 13 02:09:33.337090 kubelet[1955]: I1213 02:09:33.336985 1955 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:09:33.346023 kubelet[1955]: W1213 02:09:33.345988 1955 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:09:33.347865 kubelet[1955]: W1213 02:09:33.346150 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.347990 kubelet[1955]: E1213 02:09:33.347929 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.348526 kubelet[1955]: I1213 02:09:33.348506 1955 server.go:1256] "Started kubelet" Dec 13 02:09:33.348000 audit[1955]: AVC avc: denied { mac_admin } for pid=1955 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:33.350031 kubelet[1955]: I1213 02:09:33.350010 1955 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 02:09:33.350205 kubelet[1955]: I1213 02:09:33.350186 1955 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 02:09:33.350417 kubelet[1955]: I1213 02:09:33.350401 1955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:09:33.358949 kubelet[1955]: I1213 02:09:33.358916 1955 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:09:33.360401 kubelet[1955]: I1213 02:09:33.360363 1955 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:09:33.362087 kubelet[1955]: I1213 02:09:33.362063 1955 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:09:33.362518 kubelet[1955]: I1213 02:09:33.362499 1955 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:09:33.365406 kubelet[1955]: I1213 02:09:33.365372 1955 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:09:33.370010 kubelet[1955]: I1213 02:09:33.369981 1955 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:09:33.370256 kubelet[1955]: I1213 02:09:33.370241 1955 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:09:33.371413 kernel: audit: type=1400 audit(1734055773.348:200): avc: denied { mac_admin } for pid=1955 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:33.348000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:33.378195 kubelet[1955]: E1213 02:09:33.378171 1955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="200ms" Dec 13 02:09:33.382865 kubelet[1955]: E1213 02:09:33.382835 1955 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal.18109a89d726b44c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,UID:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 02:09:33.34847598 +0000 UTC m=+0.886547226,LastTimestamp:2024-12-13 02:09:33.34847598 +0000 UTC m=+0.886547226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,}" Dec 13 02:09:33.383422 kernel: audit: type=1401 audit(1734055773.348:200): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:33.348000 audit[1955]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c6b170 a1=c000c74a08 a2=c000c6b140 a3=25 items=0 ppid=1 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.419320 kernel: audit: type=1300 audit(1734055773.348:200): arch=c000003e syscall=188 success=no exit=-22 a0=c000c6b170 a1=c000c74a08 a2=c000c6b140 a3=25 items=0 ppid=1 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.419490 kernel: audit: type=1327 audit(1734055773.348:200): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:33.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:33.419620 kubelet[1955]: W1213 02:09:33.416273 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.419620 kubelet[1955]: E1213 02:09:33.416359 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.445876 
kubelet[1955]: I1213 02:09:33.445837 1955 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:09:33.445876 kubelet[1955]: I1213 02:09:33.445871 1955 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:09:33.446125 kubelet[1955]: I1213 02:09:33.446018 1955 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:09:33.449777 kubelet[1955]: E1213 02:09:33.449752 1955 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:09:33.454402 kubelet[1955]: I1213 02:09:33.454367 1955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:09:33.348000 audit[1955]: AVC avc: denied { mac_admin } for pid=1955 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:33.461374 kubelet[1955]: I1213 02:09:33.461346 1955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:09:33.461583 kubelet[1955]: I1213 02:09:33.461566 1955 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:09:33.461720 kubelet[1955]: I1213 02:09:33.461706 1955 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:09:33.461929 kubelet[1955]: E1213 02:09:33.461915 1955 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:09:33.466526 kubelet[1955]: W1213 02:09:33.466472 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.466725 kubelet[1955]: E1213 02:09:33.466708 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:33.348000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:33.487208 kubelet[1955]: I1213 02:09:33.487181 1955 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.488228 kernel: audit: type=1400 audit(1734055773.348:201): avc: denied { mac_admin } for pid=1955 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:33.488313 kernel: audit: type=1401 audit(1734055773.348:201): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:33.488366 kernel: audit: type=1300 audit(1734055773.348:201): arch=c000003e syscall=188 success=no exit=-22 a0=c00033ee60 a1=c000c74a20 a2=c000c6b200 a3=25 items=0 ppid=1 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.348000 audit[1955]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00033ee60 
a1=c000c74a20 a2=c000c6b200 a3=25 items=0 ppid=1 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.488918 kubelet[1955]: E1213 02:09:33.488898 1955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.521432 kernel: audit: type=1327 audit(1734055773.348:201): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:33.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:33.527426 kubelet[1955]: I1213 02:09:33.527398 1955 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:09:33.527627 kubelet[1955]: I1213 02:09:33.527612 1955 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:09:33.527772 kubelet[1955]: I1213 02:09:33.527750 1955 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:09:33.353000 audit[1965]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.550947 kubelet[1955]: I1213 02:09:33.550911 1955 policy_none.go:49] "None policy: Start" Dec 13 02:09:33.552674 kubelet[1955]: I1213 02:09:33.552653 1955 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:09:33.552853 kubelet[1955]: I1213 02:09:33.552839 1955 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:09:33.353000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdaa81ce70 a2=0 a3=7ffdaa81ce5c items=0 ppid=1955 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.564506 kubelet[1955]: E1213 02:09:33.564477 1955 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:09:33.596293 kernel: audit: type=1325 audit(1734055773.353:202): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.596455 kernel: audit: type=1300 audit(1734055773.353:202): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdaa81ce70 a2=0 a3=7ffdaa81ce5c items=0 ppid=1955 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 02:09:33.355000 audit[1966]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.355000 audit[1966]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01459d90 a2=0 a3=7ffe01459d7c items=0 ppid=1955 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 02:09:33.366000 audit[1968]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.366000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff69ac2ec0 a2=0 a3=7fff69ac2eac items=0 ppid=1955 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:09:33.370000 audit[1970]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.370000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff369402e0 a2=0 a3=7fff369402cc items=0 ppid=1955 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:09:33.452000 audit[1975]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.452000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff0dc13dc0 a2=0 a3=7fff0dc13dac items=0 ppid=1955 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 02:09:33.455000 audit[1977]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:33.455000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd5c37fbe0 a2=0 a3=7ffd5c37fbcc items=0 ppid=1955 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 02:09:33.463000 audit[1978]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.463000 
audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee7294160 a2=0 a3=7ffee729414c items=0 ppid=1955 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 02:09:33.468000 audit[1980]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:33.468000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6dc7e7c0 a2=0 a3=7ffd6dc7e7ac items=0 ppid=1955 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 02:09:33.473000 audit[1984]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:33.473000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff4903a950 a2=0 a3=7fff4903a93c items=0 ppid=1955 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 02:09:33.474000 audit[1985]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.474000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedbf40d90 a2=0 a3=7ffedbf40d7c items=0 ppid=1955 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 02:09:33.490000 audit[1986]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:33.490000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde28f2760 a2=0 a3=7ffde28f274c items=0 ppid=1955 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 02:09:33.495000 audit[1987]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:33.495000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb6687c70 a2=0 a3=7ffdb6687c5c items=0 ppid=1955 pid=1987 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 02:09:33.599731 kubelet[1955]: E1213 02:09:33.599690 1955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="400ms" Dec 13 02:09:33.601127 kubelet[1955]: I1213 02:09:33.601098 1955 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:09:33.599000 audit[1955]: AVC avc: denied { mac_admin } for pid=1955 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:33.599000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:33.599000 audit[1955]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001034ea0 a1=c001028888 a2=c001034e70 a3=25 items=0 ppid=1 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:33.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:33.601615 kubelet[1955]: I1213 02:09:33.601219 1955 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 02:09:33.603664 kubelet[1955]: I1213 02:09:33.603636 1955 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:09:33.607429 kubelet[1955]: E1213 02:09:33.607372 1955 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" not found" Dec 13 02:09:33.696868 kubelet[1955]: I1213 02:09:33.696833 1955 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.697375 kubelet[1955]: E1213 02:09:33.697312 1955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.765900 kubelet[1955]: I1213 02:09:33.765833 1955 topology_manager.go:215] "Topology Admit Handler" podUID="1df31bfd3d296ffa55602a242959cf15" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.776261 kubelet[1955]: I1213 02:09:33.776128 1955 topology_manager.go:215] "Topology Admit Handler" podUID="a9f8f6d3c509fb82120e4d0a0b5db2c7" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.783148 kubelet[1955]: I1213 02:09:33.783112 1955 topology_manager.go:215] "Topology Admit Handler" podUID="0a4336262835533d0452fec36abbcc70" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.800932 kubelet[1955]: I1213 02:09:33.800894 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1df31bfd3d296ffa55602a242959cf15-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"1df31bfd3d296ffa55602a242959cf15\") " pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.801191 kubelet[1955]: I1213 02:09:33.800959 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1df31bfd3d296ffa55602a242959cf15-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"1df31bfd3d296ffa55602a242959cf15\") " pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.801191 kubelet[1955]: I1213 02:09:33.800999 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1df31bfd3d296ffa55602a242959cf15-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"1df31bfd3d296ffa55602a242959cf15\") " pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.901530 kubelet[1955]: I1213 02:09:33.901459 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-ca-certs\") 
pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.901530 kubelet[1955]: I1213 02:09:33.901530 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.901820 kubelet[1955]: I1213 02:09:33.901568 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.901820 kubelet[1955]: I1213 02:09:33.901601 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a4336262835533d0452fec36abbcc70-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"0a4336262835533d0452fec36abbcc70\") " pod="kube-system/kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.901820 kubelet[1955]: I1213 02:09:33.901684 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:33.901820 kubelet[1955]: I1213 02:09:33.901717 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:34.001140 kubelet[1955]: E1213 02:09:34.001091 1955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="800ms" Dec 13 02:09:34.091695 env[1343]: time="2024-12-13T02:09:34.091636782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,Uid:1df31bfd3d296ffa55602a242959cf15,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:34.099710 env[1343]: time="2024-12-13T02:09:34.099653941Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,Uid:a9f8f6d3c509fb82120e4d0a0b5db2c7,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:34.110127 kubelet[1955]: I1213 02:09:34.110088 1955 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:34.110965 kubelet[1955]: E1213 02:09:34.110938 1955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:34.112895 env[1343]: time="2024-12-13T02:09:34.112840284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,Uid:0a4336262835533d0452fec36abbcc70,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:34.268731 kubelet[1955]: W1213 02:09:34.268568 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.268731 kubelet[1955]: E1213 02:09:34.268696 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.486770 kubelet[1955]: W1213 02:09:34.486312 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.486770 kubelet[1955]: E1213 02:09:34.486454 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.520053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463725361.mount: Deactivated successfully. 
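The repeated reflector and lease failures above all reduce to the same condition: the kubelet is trying to reach the API server at 10.128.0.48:6443 before the kube-apiserver static pod it is about to launch is listening, so every dial ends in "connection refused" and the client retries with a growing interval (800ms here, 1.6s a little further down). The Go sketch below mimics that probe-and-back-off loop against the endpoint taken from the log; it is illustrative only, not kubelet code, and the attempt cap is an arbitrary choice for the example.

// probe.go: illustrative sketch only -- this is NOT kubelet code. It mimics the
// "dial tcp 10.128.0.48:6443: connect: connection refused" retry pattern seen
// above: keep probing the API server endpoint, doubling the wait (800ms, 1.6s,
// ...) until the static kube-apiserver pod starts answering.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "10.128.0.48:6443" // endpoint taken from the log lines above
	wait := 800 * time.Millisecond      // first retry interval reported by the lease controller

	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: API server is reachable\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, wait)
		time.Sleep(wait)
		wait *= 2 // 800ms -> 1.6s -> 3.2s; the real controller caps this growth
	}
	fmt.Println("giving up; the static kube-apiserver pod has not come up yet")
}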
Dec 13 02:09:34.531421 env[1343]: time="2024-12-13T02:09:34.531348783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.532655 env[1343]: time="2024-12-13T02:09:34.532598911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.536133 env[1343]: time="2024-12-13T02:09:34.536094318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.537809 env[1343]: time="2024-12-13T02:09:34.537749386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.540509 env[1343]: time="2024-12-13T02:09:34.540472738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.543095 env[1343]: time="2024-12-13T02:09:34.543052480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.546006 env[1343]: time="2024-12-13T02:09:34.545954510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.549634 env[1343]: time="2024-12-13T02:09:34.549593508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.554563 env[1343]: time="2024-12-13T02:09:34.554516929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.557054 env[1343]: time="2024-12-13T02:09:34.557012199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.558332 env[1343]: time="2024-12-13T02:09:34.558292310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.559717 env[1343]: time="2024-12-13T02:09:34.559681366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:34.588273 env[1343]: time="2024-12-13T02:09:34.588190050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:34.588580 env[1343]: time="2024-12-13T02:09:34.588534410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:34.588686 env[1343]: time="2024-12-13T02:09:34.588599226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:34.590802 env[1343]: time="2024-12-13T02:09:34.590734763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e30711e89904575e71a43115888a8ddce1a9f2197c7ef521cd5d47ee4f46f4b pid=1996 runtime=io.containerd.runc.v2 Dec 13 02:09:34.635416 env[1343]: time="2024-12-13T02:09:34.635289988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:34.635682 env[1343]: time="2024-12-13T02:09:34.635643451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:34.635845 env[1343]: time="2024-12-13T02:09:34.635810863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:34.636520 env[1343]: time="2024-12-13T02:09:34.636463876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c2b842e6f06c7f980a38500b29010a6c527a7e9e2c83ab0b22749e69ef5fec3 pid=2030 runtime=io.containerd.runc.v2 Dec 13 02:09:34.657932 env[1343]: time="2024-12-13T02:09:34.657655986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:34.657932 env[1343]: time="2024-12-13T02:09:34.657711808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:34.657932 env[1343]: time="2024-12-13T02:09:34.657732688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:34.658249 env[1343]: time="2024-12-13T02:09:34.657980286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9cc182abfb170dd7e9c6058096f4926c6446adb3bd482d3fcca0da26194a020 pid=2051 runtime=io.containerd.runc.v2 Dec 13 02:09:34.703559 kubelet[1955]: W1213 02:09:34.698625 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.703559 kubelet[1955]: E1213 02:09:34.698776 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.751093 env[1343]: time="2024-12-13T02:09:34.750940581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,Uid:a9f8f6d3c509fb82120e4d0a0b5db2c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e30711e89904575e71a43115888a8ddce1a9f2197c7ef521cd5d47ee4f46f4b\"" Dec 13 02:09:34.755046 kubelet[1955]: E1213 02:09:34.755012 1955 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flat" Dec 13 02:09:34.757898 env[1343]: time="2024-12-13T02:09:34.757847123Z" level=info msg="CreateContainer within sandbox \"3e30711e89904575e71a43115888a8ddce1a9f2197c7ef521cd5d47ee4f46f4b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:09:34.789419 env[1343]: time="2024-12-13T02:09:34.789300792Z" level=info msg="CreateContainer within sandbox \"3e30711e89904575e71a43115888a8ddce1a9f2197c7ef521cd5d47ee4f46f4b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2329d93050b0c0f137a50e84124465591b0c6bc1e093ca3612af5af959bea15\"" Dec 13 02:09:34.791419 env[1343]: time="2024-12-13T02:09:34.790280921Z" level=info msg="StartContainer for \"d2329d93050b0c0f137a50e84124465591b0c6bc1e093ca3612af5af959bea15\"" Dec 13 02:09:34.804508 kubelet[1955]: E1213 02:09:34.802537 1955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="1.6s" Dec 13 02:09:34.806868 env[1343]: time="2024-12-13T02:09:34.806819995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,Uid:1df31bfd3d296ffa55602a242959cf15,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c2b842e6f06c7f980a38500b29010a6c527a7e9e2c83ab0b22749e69ef5fec3\"" Dec 13 02:09:34.810653 kubelet[1955]: E1213 02:09:34.810057 1955 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" 
podName="kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-21291" Dec 13 02:09:34.813078 env[1343]: time="2024-12-13T02:09:34.813025417Z" level=info msg="CreateContainer within sandbox \"3c2b842e6f06c7f980a38500b29010a6c527a7e9e2c83ab0b22749e69ef5fec3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:09:34.849429 env[1343]: time="2024-12-13T02:09:34.845582781Z" level=info msg="CreateContainer within sandbox \"3c2b842e6f06c7f980a38500b29010a6c527a7e9e2c83ab0b22749e69ef5fec3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6ae49180bbcbd2758b4ac38cd67fce95a2504b73815374702052002168b5ea4\"" Dec 13 02:09:34.849429 env[1343]: time="2024-12-13T02:09:34.847916315Z" level=info msg="StartContainer for \"f6ae49180bbcbd2758b4ac38cd67fce95a2504b73815374702052002168b5ea4\"" Dec 13 02:09:34.853424 env[1343]: time="2024-12-13T02:09:34.851714734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal,Uid:0a4336262835533d0452fec36abbcc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9cc182abfb170dd7e9c6058096f4926c6446adb3bd482d3fcca0da26194a020\"" Dec 13 02:09:34.854442 kubelet[1955]: E1213 02:09:34.853721 1955 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-21291" Dec 13 02:09:34.855952 env[1343]: time="2024-12-13T02:09:34.855895316Z" level=info msg="CreateContainer within sandbox \"b9cc182abfb170dd7e9c6058096f4926c6446adb3bd482d3fcca0da26194a020\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:09:34.881597 env[1343]: time="2024-12-13T02:09:34.881532571Z" level=info msg="CreateContainer within sandbox \"b9cc182abfb170dd7e9c6058096f4926c6446adb3bd482d3fcca0da26194a020\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f5feb9b280c1d8d250faa66b98251c2796279b078eed927851684556035f318\"" Dec 13 02:09:34.886782 env[1343]: time="2024-12-13T02:09:34.886718298Z" level=info msg="StartContainer for \"3f5feb9b280c1d8d250faa66b98251c2796279b078eed927851684556035f318\"" Dec 13 02:09:34.927205 kubelet[1955]: I1213 02:09:34.920459 1955 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:34.927806 kubelet[1955]: E1213 02:09:34.927769 1955 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:34.950083 kubelet[1955]: W1213 02:09:34.949927 1955 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 13 02:09:34.950083 kubelet[1955]: E1213 02:09:34.950017 1955 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Dec 
13 02:09:34.965087 env[1343]: time="2024-12-13T02:09:34.965032571Z" level=info msg="StartContainer for \"d2329d93050b0c0f137a50e84124465591b0c6bc1e093ca3612af5af959bea15\" returns successfully" Dec 13 02:09:35.017186 env[1343]: time="2024-12-13T02:09:35.017053612Z" level=info msg="StartContainer for \"f6ae49180bbcbd2758b4ac38cd67fce95a2504b73815374702052002168b5ea4\" returns successfully" Dec 13 02:09:35.146470 env[1343]: time="2024-12-13T02:09:35.146406012Z" level=info msg="StartContainer for \"3f5feb9b280c1d8d250faa66b98251c2796279b078eed927851684556035f318\" returns successfully" Dec 13 02:09:36.543043 kubelet[1955]: I1213 02:09:36.542992 1955 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:38.039915 kubelet[1955]: E1213 02:09:38.039858 1955 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:38.076568 kubelet[1955]: I1213 02:09:38.076528 1955 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:38.333134 kubelet[1955]: I1213 02:09:38.333094 1955 apiserver.go:52] "Watching apiserver" Dec 13 02:09:38.371208 kubelet[1955]: I1213 02:09:38.371162 1955 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:09:40.603896 kubelet[1955]: W1213 02:09:40.603853 1955 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:09:41.127695 systemd[1]: Reloading. Dec 13 02:09:41.247432 /usr/lib/systemd/system-generators/torcx-generator[2247]: time="2024-12-13T02:09:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:09:41.247479 /usr/lib/systemd/system-generators/torcx-generator[2247]: time="2024-12-13T02:09:41Z" level=info msg="torcx already run" Dec 13 02:09:41.368032 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:09:41.368063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:09:41.393904 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:09:41.539104 kubelet[1955]: I1213 02:09:41.539041 1955 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:09:41.540769 systemd[1]: Stopping kubelet.service... Dec 13 02:09:41.557179 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:09:41.557708 systemd[1]: Stopped kubelet.service. 
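The "Hostname for pod was too long, truncated it" entries and the metadata.name warnings above both stem from the 63-character DNS-label limit: the control-plane pod names embed the full GCE node name and overflow it, so the kubelet cuts the pod hostname at 63 characters, which is exactly the truncatedHostname value printed in the log. Below is a minimal Go sketch of that cut, using the controller-manager pod name from the log; the helper name is ours and this is not the kubelet's implementation.

// truncate.go: illustrative sketch of the 63-character hostname cut reported by
// the "Hostname for pod was too long, truncated it" entries above. Not kubelet code.
package main

import "fmt"

const hostnameMaxLen = 63 // DNS label limit, matching hostnameMaxLen=63 in the log

func truncateHostname(name string) string {
	if len(name) <= hostnameMaxLen {
		return name
	}
	return name[:hostnameMaxLen]
}

func main() {
	pod := "kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal"
	fmt.Println(truncateHostname(pod))
	// Prints: kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flat
}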
Dec 13 02:09:41.563444 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 13 02:09:41.563573 kernel: audit: type=1131 audit(1734055781.557:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:41.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:41.561530 systemd[1]: Starting kubelet.service... Dec 13 02:09:41.801896 systemd[1]: Started kubelet.service. Dec 13 02:09:41.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:41.834631 kernel: audit: type=1130 audit(1734055781.806:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:41.956087 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:09:41.956087 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:09:41.956087 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:09:41.956708 kubelet[2306]: I1213 02:09:41.956259 2306 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:09:41.974089 kubelet[2306]: I1213 02:09:41.974057 2306 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:09:41.974265 kubelet[2306]: I1213 02:09:41.974252 2306 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:09:41.976937 kubelet[2306]: I1213 02:09:41.976864 2306 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:09:41.982222 kubelet[2306]: I1213 02:09:41.982193 2306 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:09:41.986125 kubelet[2306]: I1213 02:09:41.986088 2306 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:09:42.003291 kubelet[2306]: I1213 02:09:42.002053 2306 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:09:42.003291 kubelet[2306]: I1213 02:09:42.002939 2306 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:09:42.003291 kubelet[2306]: I1213 02:09:42.003216 2306 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:09:42.005186 kubelet[2306]: I1213 02:09:42.003249 2306 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:09:42.005186 kubelet[2306]: I1213 02:09:42.004797 2306 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:09:42.005186 kubelet[2306]: I1213 02:09:42.004863 2306 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:09:42.005186 kubelet[2306]: I1213 02:09:42.005044 2306 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:09:42.005186 kubelet[2306]: I1213 02:09:42.005065 2306 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:09:42.005803 kubelet[2306]: I1213 02:09:42.005778 2306 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:09:42.007184 kubelet[2306]: I1213 02:09:42.005817 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:09:42.009245 kubelet[2306]: I1213 02:09:42.009223 2306 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:09:42.012109 kubelet[2306]: I1213 02:09:42.012080 2306 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:09:42.014338 kubelet[2306]: I1213 02:09:42.013652 2306 server.go:1256] "Started kubelet" Dec 13 02:09:42.020000 audit[2306]: AVC avc: denied { mac_admin } for pid=2306 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:42.038632 kubelet[2306]: I1213 02:09:42.020498 2306 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid 
argument" Dec 13 02:09:42.038632 kubelet[2306]: I1213 02:09:42.020582 2306 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 02:09:42.038632 kubelet[2306]: I1213 02:09:42.020631 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:09:42.039036 kubelet[2306]: I1213 02:09:42.039010 2306 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:09:42.042426 kernel: audit: type=1400 audit(1734055782.020:217): avc: denied { mac_admin } for pid=2306 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:42.020000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:42.050994 kubelet[2306]: I1213 02:09:42.050957 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:09:42.051595 kubelet[2306]: I1213 02:09:42.051574 2306 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:09:42.054408 kernel: audit: type=1401 audit(1734055782.020:217): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:42.020000 audit[2306]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c62c60 a1=c0009874e8 a2=c000c62c30 a3=25 items=0 ppid=1 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:42.062328 kubelet[2306]: I1213 02:09:42.062294 2306 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:09:42.088745 kernel: audit: type=1300 audit(1734055782.020:217): arch=c000003e syscall=188 success=no exit=-22 a0=c000c62c60 a1=c0009874e8 a2=c000c62c30 a3=25 items=0 ppid=1 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:42.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:42.091111 kubelet[2306]: I1213 02:09:42.091084 2306 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:09:42.093788 kubelet[2306]: I1213 02:09:42.093737 2306 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:09:42.094500 kubelet[2306]: I1213 02:09:42.094470 2306 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:09:42.121432 kernel: audit: type=1327 audit(1734055782.020:217): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:42.123534 kubelet[2306]: I1213 02:09:42.123500 2306 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 02:09:42.124304 kubelet[2306]: I1213 02:09:42.124274 2306 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:09:42.020000 audit[2306]: AVC avc: denied { mac_admin } for pid=2306 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:42.148237 kubelet[2306]: I1213 02:09:42.129513 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:09:42.148237 kubelet[2306]: I1213 02:09:42.129547 2306 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:09:42.148237 kubelet[2306]: I1213 02:09:42.129574 2306 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:09:42.148237 kubelet[2306]: E1213 02:09:42.129643 2306 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:09:42.148237 kubelet[2306]: I1213 02:09:42.139365 2306 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:09:42.148237 kubelet[2306]: I1213 02:09:42.139402 2306 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:09:42.148693 kernel: audit: type=1400 audit(1734055782.020:218): avc: denied { mac_admin } for pid=2306 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:42.020000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:42.020000 audit[2306]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c0eb20 a1=c000987500 a2=c000c62cf0 a3=25 items=0 ppid=1 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:42.192604 kernel: audit: type=1401 audit(1734055782.020:218): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:42.192784 kernel: audit: type=1300 audit(1734055782.020:218): arch=c000003e syscall=188 success=no exit=-22 a0=c000c0eb20 a1=c000987500 a2=c000c62cf0 a3=25 items=0 ppid=1 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:42.201363 kernel: audit: type=1327 audit(1734055782.020:218): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:42.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:42.201608 kubelet[2306]: E1213 02:09:42.197508 2306 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Dec 13 02:09:42.203890 kubelet[2306]: I1213 02:09:42.203846 2306 kubelet_node_status.go:73] 
"Attempting to register node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.225409 kubelet[2306]: I1213 02:09:42.224134 2306 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.225409 kubelet[2306]: I1213 02:09:42.224233 2306 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.229815 kubelet[2306]: E1213 02:09:42.229770 2306 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:09:42.314353 kubelet[2306]: I1213 02:09:42.314233 2306 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:09:42.314855 kubelet[2306]: I1213 02:09:42.314837 2306 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:09:42.315037 kubelet[2306]: I1213 02:09:42.315021 2306 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:09:42.315516 kubelet[2306]: I1213 02:09:42.315490 2306 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:09:42.315680 kubelet[2306]: I1213 02:09:42.315663 2306 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:09:42.315851 kubelet[2306]: I1213 02:09:42.315835 2306 policy_none.go:49] "None policy: Start" Dec 13 02:09:42.317256 kubelet[2306]: I1213 02:09:42.317236 2306 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:09:42.317489 kubelet[2306]: I1213 02:09:42.317455 2306 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:09:42.317851 kubelet[2306]: I1213 02:09:42.317833 2306 state_mem.go:75] "Updated machine memory state" Dec 13 02:09:42.320771 kubelet[2306]: I1213 02:09:42.320749 2306 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:09:42.320000 audit[2306]: AVC avc: denied { mac_admin } for pid=2306 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:09:42.320000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 02:09:42.320000 audit[2306]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e3ddd0 a1=c000e42b88 a2=c000e3dda0 a3=25 items=0 ppid=1 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:42.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 02:09:42.321669 kubelet[2306]: I1213 02:09:42.321651 2306 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 02:09:42.324138 kubelet[2306]: I1213 02:09:42.324115 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:09:42.431199 kubelet[2306]: I1213 02:09:42.430233 2306 topology_manager.go:215] "Topology Admit Handler" podUID="1df31bfd3d296ffa55602a242959cf15" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.431199 kubelet[2306]: I1213 02:09:42.430377 2306 topology_manager.go:215] "Topology Admit Handler" podUID="a9f8f6d3c509fb82120e4d0a0b5db2c7" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.431199 kubelet[2306]: I1213 02:09:42.430466 2306 topology_manager.go:215] "Topology Admit Handler" podUID="0a4336262835533d0452fec36abbcc70" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.439718 kubelet[2306]: W1213 02:09:42.439684 2306 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:09:42.441464 kubelet[2306]: W1213 02:09:42.441439 2306 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:09:42.442556 kubelet[2306]: W1213 02:09:42.442528 2306 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:09:42.442721 kubelet[2306]: E1213 02:09:42.442664 2306 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.496371 kubelet[2306]: I1213 02:09:42.496317 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.496800 kubelet[2306]: I1213 02:09:42.496762 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497060 kubelet[2306]: I1213 02:09:42.497036 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a4336262835533d0452fec36abbcc70-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: 
\"0a4336262835533d0452fec36abbcc70\") " pod="kube-system/kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497181 kubelet[2306]: I1213 02:09:42.497100 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1df31bfd3d296ffa55602a242959cf15-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"1df31bfd3d296ffa55602a242959cf15\") " pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497181 kubelet[2306]: I1213 02:09:42.497144 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1df31bfd3d296ffa55602a242959cf15-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"1df31bfd3d296ffa55602a242959cf15\") " pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497181 kubelet[2306]: I1213 02:09:42.497181 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497360 kubelet[2306]: I1213 02:09:42.497216 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1df31bfd3d296ffa55602a242959cf15-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"1df31bfd3d296ffa55602a242959cf15\") " pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497360 kubelet[2306]: I1213 02:09:42.497256 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.497360 kubelet[2306]: I1213 02:09:42.497312 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9f8f6d3c509fb82120e4d0a0b5db2c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal\" (UID: \"a9f8f6d3c509fb82120e4d0a0b5db2c7\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:09:42.771123 update_engine[1332]: I1213 02:09:42.770442 1332 update_attempter.cc:509] Updating boot flags... 
Dec 13 02:09:43.014861 kubelet[2306]: I1213 02:09:43.014495 2306 apiserver.go:52] "Watching apiserver" Dec 13 02:09:43.097850 kubelet[2306]: I1213 02:09:43.095184 2306 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:09:43.308857 kubelet[2306]: I1213 02:09:43.308707 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" podStartSLOduration=1.3085590790000001 podStartE2EDuration="1.308559079s" podCreationTimestamp="2024-12-13 02:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:43.308178906 +0000 UTC m=+1.472473403" watchObservedRunningTime="2024-12-13 02:09:43.308559079 +0000 UTC m=+1.472853566" Dec 13 02:09:43.310844 kubelet[2306]: I1213 02:09:43.310715 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" podStartSLOduration=1.310667489 podStartE2EDuration="1.310667489s" podCreationTimestamp="2024-12-13 02:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:43.290631947 +0000 UTC m=+1.454926445" watchObservedRunningTime="2024-12-13 02:09:43.310667489 +0000 UTC m=+1.474961975" Dec 13 02:09:43.350522 kubelet[2306]: I1213 02:09:43.350332 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" podStartSLOduration=3.3502715849999998 podStartE2EDuration="3.350271585s" podCreationTimestamp="2024-12-13 02:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:43.328454072 +0000 UTC m=+1.492748566" watchObservedRunningTime="2024-12-13 02:09:43.350271585 +0000 UTC m=+1.514566073" Dec 13 02:09:46.205543 sudo[1595]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:46.204000 audit[1595]: USER_END pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:46.204000 audit[1595]: CRED_DISP pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 02:09:46.248118 sshd[1591]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:46.250000 audit[1591]: USER_END pid=1591 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:46.250000 audit[1591]: CRED_DISP pid=1591 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:09:46.255221 systemd[1]: sshd@7-10.128.0.48:22-139.178.68.195:48322.service: Deactivated successfully. 
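The pod_startup_latency_tracker entries above report podStartE2EDuration as the gap between podCreationTimestamp and watchObservedRunningTime; the pull timestamps are zero in these entries, so the SLO duration equals the end-to-end duration. The Go sketch below redoes that subtraction with the kube-apiserver timestamps copied from the log; it illustrates the arithmetic only and is not the tracker's code.

// slo.go: reproduce the podStartE2EDuration reported by the
// pod_startup_latency_tracker entries above. Timestamps are copied from the
// kube-apiserver entry; the arithmetic is observed-running-time minus
// pod-creation-time. Illustrative sketch, not the tracker's implementation.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2024-12-13 02:09:42 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// time.Parse accepts a fractional-seconds field on input even though the
	// layout above does not spell one out.
	observed, err := time.Parse(layout, "2024-12-13 02:09:43.308559079 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(observed.Sub(created)) // 1.308559079s, matching podStartE2EDuration
}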
Dec 13 02:09:46.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.48:22-139.178.68.195:48322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:09:46.256474 systemd-logind[1329]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:09:46.257287 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:09:46.258354 systemd-logind[1329]: Removed session 7. Dec 13 02:09:54.974115 kubelet[2306]: I1213 02:09:54.974054 2306 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:09:54.974858 env[1343]: time="2024-12-13T02:09:54.974805878Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:09:54.975403 kubelet[2306]: I1213 02:09:54.975179 2306 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:09:55.925324 kubelet[2306]: I1213 02:09:55.925260 2306 topology_manager.go:215] "Topology Admit Handler" podUID="b62c1eb0-ba37-4a26-9514-e08f7b9e4a54" podNamespace="kube-system" podName="kube-proxy-gwx4w" Dec 13 02:09:55.932902 kubelet[2306]: W1213 02:09:55.932855 2306 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' and this object Dec 13 02:09:55.933176 kubelet[2306]: E1213 02:09:55.933142 2306 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' and this object Dec 13 02:09:55.933539 kubelet[2306]: W1213 02:09:55.933495 2306 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' and this object Dec 13 02:09:55.933725 kubelet[2306]: E1213 02:09:55.933706 2306 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' and this object Dec 13 02:09:56.058928 kubelet[2306]: I1213 02:09:56.058857 2306 topology_manager.go:215] "Topology Admit Handler" podUID="05b858c4-4812-4464-bf85-785098316a12" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-ws4s9" Dec 13 02:09:56.087245 kubelet[2306]: I1213 02:09:56.087184 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/b62c1eb0-ba37-4a26-9514-e08f7b9e4a54-kube-proxy\") pod \"kube-proxy-gwx4w\" (UID: \"b62c1eb0-ba37-4a26-9514-e08f7b9e4a54\") " pod="kube-system/kube-proxy-gwx4w" Dec 13 02:09:56.087245 kubelet[2306]: I1213 02:09:56.087250 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b62c1eb0-ba37-4a26-9514-e08f7b9e4a54-xtables-lock\") pod \"kube-proxy-gwx4w\" (UID: \"b62c1eb0-ba37-4a26-9514-e08f7b9e4a54\") " pod="kube-system/kube-proxy-gwx4w" Dec 13 02:09:56.087564 kubelet[2306]: I1213 02:09:56.087286 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndw7z\" (UniqueName: \"kubernetes.io/projected/b62c1eb0-ba37-4a26-9514-e08f7b9e4a54-kube-api-access-ndw7z\") pod \"kube-proxy-gwx4w\" (UID: \"b62c1eb0-ba37-4a26-9514-e08f7b9e4a54\") " pod="kube-system/kube-proxy-gwx4w" Dec 13 02:09:56.087564 kubelet[2306]: I1213 02:09:56.087326 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b62c1eb0-ba37-4a26-9514-e08f7b9e4a54-lib-modules\") pod \"kube-proxy-gwx4w\" (UID: \"b62c1eb0-ba37-4a26-9514-e08f7b9e4a54\") " pod="kube-system/kube-proxy-gwx4w" Dec 13 02:09:56.188724 kubelet[2306]: I1213 02:09:56.188027 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05b858c4-4812-4464-bf85-785098316a12-var-lib-calico\") pod \"tigera-operator-c7ccbd65-ws4s9\" (UID: \"05b858c4-4812-4464-bf85-785098316a12\") " pod="tigera-operator/tigera-operator-c7ccbd65-ws4s9" Dec 13 02:09:56.188724 kubelet[2306]: I1213 02:09:56.188174 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdwc\" (UniqueName: \"kubernetes.io/projected/05b858c4-4812-4464-bf85-785098316a12-kube-api-access-lqdwc\") pod \"tigera-operator-c7ccbd65-ws4s9\" (UID: \"05b858c4-4812-4464-bf85-785098316a12\") " pod="tigera-operator/tigera-operator-c7ccbd65-ws4s9" Dec 13 02:09:56.366303 env[1343]: time="2024-12-13T02:09:56.366067676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ws4s9,Uid:05b858c4-4812-4464-bf85-785098316a12,Namespace:tigera-operator,Attempt:0,}" Dec 13 02:09:56.401358 env[1343]: time="2024-12-13T02:09:56.401227019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:56.401358 env[1343]: time="2024-12-13T02:09:56.401319169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:56.401697 env[1343]: time="2024-12-13T02:09:56.401337766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:56.402198 env[1343]: time="2024-12-13T02:09:56.402073579Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bfdcbe34681b828f8a18b77af052d542b6029243ede8de7412d28657134fb6e pid=2408 runtime=io.containerd.runc.v2 Dec 13 02:09:56.523017 env[1343]: time="2024-12-13T02:09:56.522547468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ws4s9,Uid:05b858c4-4812-4464-bf85-785098316a12,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7bfdcbe34681b828f8a18b77af052d542b6029243ede8de7412d28657134fb6e\"" Dec 13 02:09:56.526726 env[1343]: time="2024-12-13T02:09:56.526680187Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 02:09:57.197904 kubelet[2306]: E1213 02:09:57.197700 2306 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 02:09:57.197904 kubelet[2306]: E1213 02:09:57.197898 2306 projected.go:200] Error preparing data for projected volume kube-api-access-ndw7z for pod kube-system/kube-proxy-gwx4w: failed to sync configmap cache: timed out waiting for the condition Dec 13 02:09:57.199780 kubelet[2306]: E1213 02:09:57.198147 2306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b62c1eb0-ba37-4a26-9514-e08f7b9e4a54-kube-api-access-ndw7z podName:b62c1eb0-ba37-4a26-9514-e08f7b9e4a54 nodeName:}" failed. No retries permitted until 2024-12-13 02:09:57.698108346 +0000 UTC m=+15.862402830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ndw7z" (UniqueName: "kubernetes.io/projected/b62c1eb0-ba37-4a26-9514-e08f7b9e4a54-kube-api-access-ndw7z") pod "kube-proxy-gwx4w" (UID: "b62c1eb0-ba37-4a26-9514-e08f7b9e4a54") : failed to sync configmap cache: timed out waiting for the condition Dec 13 02:09:57.302926 systemd[1]: run-containerd-runc-k8s.io-7bfdcbe34681b828f8a18b77af052d542b6029243ede8de7412d28657134fb6e-runc.C7bnAP.mount: Deactivated successfully. Dec 13 02:09:57.733052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222188482.mount: Deactivated successfully. Dec 13 02:09:57.739285 env[1343]: time="2024-12-13T02:09:57.739232283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gwx4w,Uid:b62c1eb0-ba37-4a26-9514-e08f7b9e4a54,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:57.776616 env[1343]: time="2024-12-13T02:09:57.776528444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:57.776860 env[1343]: time="2024-12-13T02:09:57.776586550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:57.776860 env[1343]: time="2024-12-13T02:09:57.776604551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:57.776860 env[1343]: time="2024-12-13T02:09:57.776787715Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fda425949c6d73725d8efe3311ce43a4ec1e8448d320b4fcdcc89772fd4dbc25 pid=2451 runtime=io.containerd.runc.v2 Dec 13 02:09:57.861897 env[1343]: time="2024-12-13T02:09:57.861832218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gwx4w,Uid:b62c1eb0-ba37-4a26-9514-e08f7b9e4a54,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda425949c6d73725d8efe3311ce43a4ec1e8448d320b4fcdcc89772fd4dbc25\"" Dec 13 02:09:57.865803 env[1343]: time="2024-12-13T02:09:57.865596144Z" level=info msg="CreateContainer within sandbox \"fda425949c6d73725d8efe3311ce43a4ec1e8448d320b4fcdcc89772fd4dbc25\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:09:57.889635 env[1343]: time="2024-12-13T02:09:57.889566957Z" level=info msg="CreateContainer within sandbox \"fda425949c6d73725d8efe3311ce43a4ec1e8448d320b4fcdcc89772fd4dbc25\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7559c03d73e38935fa9b2a9f33cc621f0b9827e2b3b51f982eaf0374608ca359\"" Dec 13 02:09:57.891953 env[1343]: time="2024-12-13T02:09:57.890416117Z" level=info msg="StartContainer for \"7559c03d73e38935fa9b2a9f33cc621f0b9827e2b3b51f982eaf0374608ca359\"" Dec 13 02:09:58.006888 env[1343]: time="2024-12-13T02:09:58.006756485Z" level=info msg="StartContainer for \"7559c03d73e38935fa9b2a9f33cc621f0b9827e2b3b51f982eaf0374608ca359\" returns successfully" Dec 13 02:09:58.142000 audit[2542]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.148324 kernel: kauditd_printk_skb: 9 callbacks suppressed Dec 13 02:09:58.148505 kernel: audit: type=1325 audit(1734055798.142:225): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.152000 audit[2543]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.152000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6ec36250 a2=0 a3=7ffc6ec3623c items=0 ppid=2502 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.214178 kernel: audit: type=1325 audit(1734055798.152:226): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.214340 kernel: audit: type=1300 audit(1734055798.152:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6ec36250 a2=0 a3=7ffc6ec3623c items=0 ppid=2502 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 02:09:58.232503 kernel: audit: type=1327 audit(1734055798.152:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 02:09:58.155000 audit[2544]: 
NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.155000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe84c3f740 a2=0 a3=7ffe84c3f72c items=0 ppid=2502 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.281206 kernel: audit: type=1325 audit(1734055798.155:227): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.281462 kernel: audit: type=1300 audit(1734055798.155:227): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe84c3f740 a2=0 a3=7ffe84c3f72c items=0 ppid=2502 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 02:09:58.307570 kernel: audit: type=1327 audit(1734055798.155:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 02:09:58.157000 audit[2545]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.339007 kernel: audit: type=1325 audit(1734055798.157:228): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.347962 kubelet[2306]: I1213 02:09:58.342426 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gwx4w" podStartSLOduration=3.342339145 podStartE2EDuration="3.342339145s" podCreationTimestamp="2024-12-13 02:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:58.342207669 +0000 UTC m=+16.506502202" watchObservedRunningTime="2024-12-13 02:09:58.342339145 +0000 UTC m=+16.506633629" Dec 13 02:09:58.157000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff71980210 a2=0 a3=7fff719801fc items=0 ppid=2502 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.383515 kernel: audit: type=1300 audit(1734055798.157:228): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff71980210 a2=0 a3=7fff719801fc items=0 ppid=2502 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.383704 kernel: audit: type=1327 audit(1734055798.157:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 02:09:58.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 02:09:58.142000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff062b6000 a2=0 a3=7fff062b5fec 
items=0 ppid=2502 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.142000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 02:09:58.181000 audit[2547]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.181000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc7cd0690 a2=0 a3=7fffc7cd067c items=0 ppid=2502 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 02:09:58.191000 audit[2548]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.191000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffead45c110 a2=0 a3=7ffead45c0fc items=0 ppid=2502 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 02:09:58.248000 audit[2549]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.248000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffcb98a950 a2=0 a3=7fffcb98a93c items=0 ppid=2502 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 02:09:58.285000 audit[2551]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.285000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe34ac4b10 a2=0 a3=7ffe34ac4afc items=0 ppid=2502 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 02:09:58.311000 audit[2554]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.311000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb674bbe0 a2=0 a3=7ffeb674bbcc items=0 
ppid=2502 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 02:09:58.320000 audit[2555]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.320000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcea44cad0 a2=0 a3=7ffcea44cabc items=0 ppid=2502 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 02:09:58.347000 audit[2557]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.347000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffed1e1c8a0 a2=0 a3=7ffed1e1c88c items=0 ppid=2502 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 02:09:58.349000 audit[2558]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.349000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7308dd20 a2=0 a3=7ffd7308dd0c items=0 ppid=2502 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 02:09:58.370000 audit[2560]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.370000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc01275ef0 a2=0 a3=7ffc01275edc items=0 ppid=2502 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 02:09:58.399000 audit[2563]: NETFILTER_CFG 
table=filter:51 family=2 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.399000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd75079680 a2=0 a3=7ffd7507966c items=0 ppid=2502 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 02:09:58.402000 audit[2564]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.402000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff058c0f70 a2=0 a3=7fff058c0f5c items=0 ppid=2502 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 02:09:58.408000 audit[2566]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.408000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca5dc2850 a2=0 a3=7ffca5dc283c items=0 ppid=2502 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 02:09:58.410000 audit[2567]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.410000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5b5e8470 a2=0 a3=7ffe5b5e845c items=0 ppid=2502 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 02:09:58.416000 audit[2569]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.416000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd47ea4dc0 a2=0 a3=7ffd47ea4dac items=0 ppid=2502 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.416000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 02:09:58.424000 audit[2572]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.424000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd8d37de20 a2=0 a3=7ffd8d37de0c items=0 ppid=2502 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 02:09:58.433000 audit[2575]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.433000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff404aba10 a2=0 a3=7fff404ab9fc items=0 ppid=2502 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 02:09:58.436000 audit[2576]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.436000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed331ba30 a2=0 a3=7ffed331ba1c items=0 ppid=2502 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 02:09:58.443000 audit[2578]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.443000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffea14a7490 a2=0 a3=7ffea14a747c items=0 ppid=2502 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:09:58.450000 audit[2581]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.450000 audit[2581]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccda48400 a2=0 a3=7ffccda483ec items=0 ppid=2502 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:09:58.452000 audit[2582]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.452000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11160ff0 a2=0 a3=7fff11160fdc items=0 ppid=2502 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 02:09:58.458000 audit[2584]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 02:09:58.458000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffeca02ca00 a2=0 a3=7ffeca02c9ec items=0 ppid=2502 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 02:09:58.502000 audit[2590]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:09:58.502000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd4d837c20 a2=0 a3=7ffd4d837c0c items=0 ppid=2502 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:09:58.518000 audit[2590]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:09:58.518000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd4d837c20 a2=0 a3=7ffd4d837c0c items=0 ppid=2502 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:09:58.521000 audit[2595]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2595 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.521000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc70624900 a2=0 a3=7ffc706248ec items=0 ppid=2502 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 02:09:58.526000 audit[2597]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.526000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdea760980 a2=0 a3=7ffdea76096c items=0 ppid=2502 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 02:09:58.534000 audit[2600]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.534000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe4de42240 a2=0 a3=7ffe4de4222c items=0 ppid=2502 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 02:09:58.536000 audit[2601]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.536000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd336fe820 a2=0 a3=7ffd336fe80c items=0 ppid=2502 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 02:09:58.541000 audit[2603]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.541000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6e89db20 a2=0 a3=7ffd6e89db0c items=0 ppid=2502 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.541000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 02:09:58.552000 audit[2604]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.552000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe25b40740 a2=0 a3=7ffe25b4072c items=0 ppid=2502 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 02:09:58.557000 audit[2606]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.557000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc30749c30 a2=0 a3=7ffc30749c1c items=0 ppid=2502 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 02:09:58.564000 audit[2609]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.564000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdc959ea20 a2=0 a3=7ffdc959ea0c items=0 ppid=2502 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 02:09:58.567000 audit[2610]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.567000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6cbeed10 a2=0 a3=7ffd6cbeecfc items=0 ppid=2502 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 02:09:58.573000 audit[2612]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.573000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc82091480 a2=0 
a3=7ffc8209146c items=0 ppid=2502 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 02:09:58.575000 audit[2613]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.575000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd755e3b50 a2=0 a3=7ffd755e3b3c items=0 ppid=2502 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 02:09:58.580000 audit[2615]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.580000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc1c3d8ed0 a2=0 a3=7ffc1c3d8ebc items=0 ppid=2502 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 02:09:58.588000 audit[2618]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.588000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfc3ce4f0 a2=0 a3=7ffcfc3ce4dc items=0 ppid=2502 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 02:09:58.597000 audit[2621]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.597000 audit[2621]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcce2be70 a2=0 a3=7fffcce2be5c items=0 ppid=2502 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.597000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 02:09:58.600000 audit[2622]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.600000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff02fd2ab0 a2=0 a3=7fff02fd2a9c items=0 ppid=2502 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 02:09:58.606000 audit[2624]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.606000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc77d15a60 a2=0 a3=7ffc77d15a4c items=0 ppid=2502 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:09:58.613000 audit[2627]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.613000 audit[2627]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc1f509ab0 a2=0 a3=7ffc1f509a9c items=0 ppid=2502 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 02:09:58.615000 audit[2628]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.615000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7ba04750 a2=0 a3=7ffe7ba0473c items=0 ppid=2502 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 02:09:58.620000 audit[2630]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.620000 audit[2630]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd92f42f30 a2=0 a3=7ffd92f42f1c items=0 ppid=2502 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.620000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 02:09:58.623000 audit[2631]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.623000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda676cbf0 a2=0 a3=7ffda676cbdc items=0 ppid=2502 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 02:09:58.629000 audit[2633]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.629000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd4bdb1810 a2=0 a3=7ffd4bdb17fc items=0 ppid=2502 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:09:58.636000 audit[2636]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 02:09:58.636000 audit[2636]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8a185cc0 a2=0 a3=7fff8a185cac items=0 ppid=2502 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 02:09:58.642000 audit[2638]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 02:09:58.642000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffd38c7f0c0 a2=0 a3=7ffd38c7f0ac items=0 ppid=2502 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.642000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:09:58.644000 audit[2638]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 02:09:58.644000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd38c7f0c0 a2=0 a3=7ffd38c7f0ac items=0 ppid=2502 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:09:58.644000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:09:59.154096 env[1343]: time="2024-12-13T02:09:59.154027342Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:59.157923 env[1343]: time="2024-12-13T02:09:59.157869744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:59.161051 env[1343]: time="2024-12-13T02:09:59.161004126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:59.163617 env[1343]: time="2024-12-13T02:09:59.163560922Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:59.164518 env[1343]: time="2024-12-13T02:09:59.164446889Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 02:09:59.168911 env[1343]: time="2024-12-13T02:09:59.168855852Z" level=info msg="CreateContainer within sandbox \"7bfdcbe34681b828f8a18b77af052d542b6029243ede8de7412d28657134fb6e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 02:09:59.192690 env[1343]: time="2024-12-13T02:09:59.192592368Z" level=info msg="CreateContainer within sandbox \"7bfdcbe34681b828f8a18b77af052d542b6029243ede8de7412d28657134fb6e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"01e4b6206581fd7833fadc3fde9663ffd0268a31f96e632b217abcdf91afcd4f\"" Dec 13 02:09:59.195905 env[1343]: time="2024-12-13T02:09:59.194701056Z" level=info msg="StartContainer for \"01e4b6206581fd7833fadc3fde9663ffd0268a31f96e632b217abcdf91afcd4f\"" Dec 13 02:09:59.451831 env[1343]: time="2024-12-13T02:09:59.451655255Z" level=info msg="StartContainer for \"01e4b6206581fd7833fadc3fde9663ffd0268a31f96e632b217abcdf91afcd4f\" returns successfully" Dec 13 02:10:02.414000 audit[2678]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:02.414000 audit[2678]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe85dcc740 a2=0 a3=7ffe85dcc72c items=0 ppid=2502 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:02.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:02.419000 audit[2678]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:02.419000 audit[2678]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffe85dcc740 a2=0 a3=0 items=0 ppid=2502 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:02.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:02.436000 audit[2680]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:02.436000 audit[2680]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffe8472380 a2=0 a3=7fffe847236c items=0 ppid=2502 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:02.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:02.441000 audit[2680]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:02.441000 audit[2680]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe8472380 a2=0 a3=0 items=0 ppid=2502 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:02.441000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:02.550675 kubelet[2306]: I1213 02:10:02.550621 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-ws4s9" podStartSLOduration=3.910399088 podStartE2EDuration="6.550556836s" podCreationTimestamp="2024-12-13 02:09:56 +0000 UTC" firstStartedPulling="2024-12-13 02:09:56.524887113 +0000 UTC m=+14.689181583" lastFinishedPulling="2024-12-13 02:09:59.16504485 +0000 UTC m=+17.329339331" observedRunningTime="2024-12-13 02:10:00.467963613 +0000 UTC m=+18.632258107" watchObservedRunningTime="2024-12-13 02:10:02.550556836 +0000 UTC m=+20.714851326" Dec 13 02:10:02.551317 kubelet[2306]: I1213 02:10:02.550969 2306 topology_manager.go:215] "Topology Admit Handler" podUID="3636086d-9d0f-4819-9f2e-0523d5122b5b" podNamespace="calico-system" podName="calico-typha-c7497d9dd-8pqnf" Dec 13 02:10:02.719814 kubelet[2306]: I1213 02:10:02.719654 2306 topology_manager.go:215] "Topology Admit Handler" podUID="f0285ee9-b18a-45ed-843e-0cb24fda4222" podNamespace="calico-system" podName="calico-node-xnljn" Dec 13 02:10:02.745974 kubelet[2306]: I1213 02:10:02.745928 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j84d\" (UniqueName: \"kubernetes.io/projected/3636086d-9d0f-4819-9f2e-0523d5122b5b-kube-api-access-5j84d\") pod \"calico-typha-c7497d9dd-8pqnf\" (UID: \"3636086d-9d0f-4819-9f2e-0523d5122b5b\") " pod="calico-system/calico-typha-c7497d9dd-8pqnf" Dec 13 02:10:02.746182 kubelet[2306]: I1213 02:10:02.746015 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3636086d-9d0f-4819-9f2e-0523d5122b5b-typha-certs\") pod 
\"calico-typha-c7497d9dd-8pqnf\" (UID: \"3636086d-9d0f-4819-9f2e-0523d5122b5b\") " pod="calico-system/calico-typha-c7497d9dd-8pqnf" Dec 13 02:10:02.746182 kubelet[2306]: I1213 02:10:02.746068 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3636086d-9d0f-4819-9f2e-0523d5122b5b-tigera-ca-bundle\") pod \"calico-typha-c7497d9dd-8pqnf\" (UID: \"3636086d-9d0f-4819-9f2e-0523d5122b5b\") " pod="calico-system/calico-typha-c7497d9dd-8pqnf" Dec 13 02:10:02.846433 kubelet[2306]: I1213 02:10:02.846311 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-flexvol-driver-host\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.846638 kubelet[2306]: I1213 02:10:02.846467 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-var-run-calico\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.846638 kubelet[2306]: I1213 02:10:02.846551 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-lib-modules\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.846782 kubelet[2306]: I1213 02:10:02.846641 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-cni-bin-dir\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.846871 kubelet[2306]: I1213 02:10:02.846797 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0285ee9-b18a-45ed-843e-0cb24fda4222-tigera-ca-bundle\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.847574 kubelet[2306]: I1213 02:10:02.847509 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-cni-net-dir\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.847906 kubelet[2306]: I1213 02:10:02.847868 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-cni-log-dir\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.848082 kubelet[2306]: I1213 02:10:02.848064 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-var-lib-calico\") pod \"calico-node-xnljn\" (UID: 
\"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.848273 kubelet[2306]: I1213 02:10:02.848256 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5t7w\" (UniqueName: \"kubernetes.io/projected/f0285ee9-b18a-45ed-843e-0cb24fda4222-kube-api-access-m5t7w\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.848448 kubelet[2306]: I1213 02:10:02.848429 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-xtables-lock\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.848630 kubelet[2306]: I1213 02:10:02.848612 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f0285ee9-b18a-45ed-843e-0cb24fda4222-policysync\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.849057 kubelet[2306]: I1213 02:10:02.849025 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f0285ee9-b18a-45ed-843e-0cb24fda4222-node-certs\") pod \"calico-node-xnljn\" (UID: \"f0285ee9-b18a-45ed-843e-0cb24fda4222\") " pod="calico-system/calico-node-xnljn" Dec 13 02:10:02.952876 kubelet[2306]: E1213 02:10:02.952841 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.953130 kubelet[2306]: W1213 02:10:02.953105 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.953304 kubelet[2306]: E1213 02:10:02.953284 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.953714 kubelet[2306]: E1213 02:10:02.953678 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.953714 kubelet[2306]: W1213 02:10:02.953702 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.953901 kubelet[2306]: E1213 02:10:02.953726 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:02.956368 kubelet[2306]: E1213 02:10:02.956348 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.956566 kubelet[2306]: W1213 02:10:02.956547 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.956807 kubelet[2306]: E1213 02:10:02.956792 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.960583 kubelet[2306]: E1213 02:10:02.960560 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.960735 kubelet[2306]: W1213 02:10:02.960715 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.961036 kubelet[2306]: E1213 02:10:02.961017 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.975961 kubelet[2306]: E1213 02:10:02.972354 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.976491 kubelet[2306]: W1213 02:10:02.976458 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.981938 kubelet[2306]: E1213 02:10:02.981911 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.982123 kubelet[2306]: W1213 02:10:02.982099 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.992301 kubelet[2306]: E1213 02:10:02.992272 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.992576 kubelet[2306]: E1213 02:10:02.992546 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.994905 kubelet[2306]: E1213 02:10:02.994883 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.995078 kubelet[2306]: W1213 02:10:02.995055 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.995428 kubelet[2306]: E1213 02:10:02.995409 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:02.995869 kubelet[2306]: E1213 02:10:02.995854 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.995996 kubelet[2306]: W1213 02:10:02.995982 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.996211 kubelet[2306]: E1213 02:10:02.996196 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.996443 kubelet[2306]: E1213 02:10:02.996420 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.996573 kubelet[2306]: W1213 02:10:02.996554 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.996835 kubelet[2306]: E1213 02:10:02.996819 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.997230 kubelet[2306]: E1213 02:10:02.997204 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.998238 kubelet[2306]: W1213 02:10:02.998187 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.998568 kubelet[2306]: E1213 02:10:02.998550 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.998961 kubelet[2306]: E1213 02:10:02.998935 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.999095 kubelet[2306]: W1213 02:10:02.999076 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:02.999348 kubelet[2306]: E1213 02:10:02.999333 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:02.999767 kubelet[2306]: E1213 02:10:02.999750 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:02.999905 kubelet[2306]: W1213 02:10:02.999886 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.000225 kubelet[2306]: E1213 02:10:03.000208 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.000697 kubelet[2306]: E1213 02:10:03.000663 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.000831 kubelet[2306]: W1213 02:10:03.000813 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.001079 kubelet[2306]: E1213 02:10:03.001065 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.001457 kubelet[2306]: E1213 02:10:03.001441 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.001592 kubelet[2306]: W1213 02:10:03.001573 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.001982 kubelet[2306]: E1213 02:10:03.001963 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.003116 kubelet[2306]: E1213 02:10:03.003096 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.003827 kubelet[2306]: W1213 02:10:03.003805 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.004249 kubelet[2306]: E1213 02:10:03.004232 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.004380 kubelet[2306]: W1213 02:10:03.004362 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.004692 kubelet[2306]: E1213 02:10:03.004668 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.004901 kubelet[2306]: E1213 02:10:03.004877 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.005266 kubelet[2306]: E1213 02:10:03.005251 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.005451 kubelet[2306]: W1213 02:10:03.005377 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.005580 kubelet[2306]: E1213 02:10:03.005565 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.006063 kubelet[2306]: E1213 02:10:03.006046 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.006216 kubelet[2306]: W1213 02:10:03.006197 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.006365 kubelet[2306]: E1213 02:10:03.006350 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.007750 kubelet[2306]: E1213 02:10:03.007721 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.009344 kubelet[2306]: W1213 02:10:03.009321 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.009517 kubelet[2306]: E1213 02:10:03.009500 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.027521 env[1343]: time="2024-12-13T02:10:03.026819459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xnljn,Uid:f0285ee9-b18a-45ed-843e-0cb24fda4222,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:03.031068 kubelet[2306]: I1213 02:10:03.031025 2306 topology_manager.go:215] "Topology Admit Handler" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" podNamespace="calico-system" podName="csi-node-driver-mc2dz" Dec 13 02:10:03.032517 kubelet[2306]: E1213 02:10:03.031892 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:03.060070 kubelet[2306]: E1213 02:10:03.060036 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.060311 kubelet[2306]: W1213 02:10:03.060283 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.060542 kubelet[2306]: E1213 02:10:03.060509 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.068097 kubelet[2306]: E1213 02:10:03.068060 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.068097 kubelet[2306]: W1213 02:10:03.068095 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.068333 kubelet[2306]: E1213 02:10:03.068125 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.068456 kubelet[2306]: E1213 02:10:03.068437 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.068548 kubelet[2306]: W1213 02:10:03.068457 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.068548 kubelet[2306]: E1213 02:10:03.068478 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.068780 kubelet[2306]: E1213 02:10:03.068759 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.068780 kubelet[2306]: W1213 02:10:03.068780 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.068965 kubelet[2306]: E1213 02:10:03.068800 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.069155 kubelet[2306]: E1213 02:10:03.069115 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.069155 kubelet[2306]: W1213 02:10:03.069131 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.069155 kubelet[2306]: E1213 02:10:03.069151 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.069579 kubelet[2306]: E1213 02:10:03.069438 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.069579 kubelet[2306]: W1213 02:10:03.069455 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.069579 kubelet[2306]: E1213 02:10:03.069474 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.069795 kubelet[2306]: E1213 02:10:03.069714 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.069795 kubelet[2306]: W1213 02:10:03.069737 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.069795 kubelet[2306]: E1213 02:10:03.069760 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.070030 kubelet[2306]: E1213 02:10:03.070008 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.070030 kubelet[2306]: W1213 02:10:03.070025 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.070186 kubelet[2306]: E1213 02:10:03.070044 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.070373 kubelet[2306]: E1213 02:10:03.070355 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.070373 kubelet[2306]: W1213 02:10:03.070372 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.070552 kubelet[2306]: E1213 02:10:03.070411 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.070963 kubelet[2306]: E1213 02:10:03.070921 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.071131 kubelet[2306]: W1213 02:10:03.071109 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.071493 kubelet[2306]: E1213 02:10:03.071466 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.071979 kubelet[2306]: E1213 02:10:03.071961 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.072105 kubelet[2306]: W1213 02:10:03.072088 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.072224 kubelet[2306]: E1213 02:10:03.072196 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.078513 kubelet[2306]: E1213 02:10:03.078491 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.078681 kubelet[2306]: W1213 02:10:03.078659 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.078814 kubelet[2306]: E1213 02:10:03.078797 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.079708 kubelet[2306]: E1213 02:10:03.079682 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.079708 kubelet[2306]: W1213 02:10:03.079707 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.079895 kubelet[2306]: E1213 02:10:03.079738 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.080043 kubelet[2306]: E1213 02:10:03.080022 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.080123 kubelet[2306]: W1213 02:10:03.080048 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.080123 kubelet[2306]: E1213 02:10:03.080068 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.080359 kubelet[2306]: E1213 02:10:03.080340 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.080465 kubelet[2306]: W1213 02:10:03.080360 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.080465 kubelet[2306]: E1213 02:10:03.080380 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.084019 kubelet[2306]: E1213 02:10:03.083994 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.084019 kubelet[2306]: W1213 02:10:03.084019 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.084219 kubelet[2306]: E1213 02:10:03.084041 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.084397 kubelet[2306]: E1213 02:10:03.084364 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.084489 kubelet[2306]: W1213 02:10:03.084416 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.084489 kubelet[2306]: E1213 02:10:03.084439 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.088647 kubelet[2306]: E1213 02:10:03.088627 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.088647 kubelet[2306]: W1213 02:10:03.088646 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.088833 kubelet[2306]: E1213 02:10:03.088670 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.088899 env[1343]: time="2024-12-13T02:10:03.080706319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:03.088899 env[1343]: time="2024-12-13T02:10:03.080770850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:03.088899 env[1343]: time="2024-12-13T02:10:03.080800100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:03.088899 env[1343]: time="2024-12-13T02:10:03.081000404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b pid=2717 runtime=io.containerd.runc.v2 Dec 13 02:10:03.091534 kubelet[2306]: E1213 02:10:03.091510 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.091534 kubelet[2306]: W1213 02:10:03.091534 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.091714 kubelet[2306]: E1213 02:10:03.091556 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.093124 kubelet[2306]: E1213 02:10:03.091984 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.093124 kubelet[2306]: W1213 02:10:03.092000 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.093124 kubelet[2306]: E1213 02:10:03.092023 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.093124 kubelet[2306]: E1213 02:10:03.092469 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.093124 kubelet[2306]: W1213 02:10:03.092485 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.093124 kubelet[2306]: E1213 02:10:03.092507 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.093124 kubelet[2306]: I1213 02:10:03.092552 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58caca33-88e9-4a41-9735-56d04f40c4b1-socket-dir\") pod \"csi-node-driver-mc2dz\" (UID: \"58caca33-88e9-4a41-9735-56d04f40c4b1\") " pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:03.093124 kubelet[2306]: E1213 02:10:03.092904 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.093124 kubelet[2306]: W1213 02:10:03.092921 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.093672 kubelet[2306]: E1213 02:10:03.092949 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.093672 kubelet[2306]: I1213 02:10:03.092987 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/58caca33-88e9-4a41-9735-56d04f40c4b1-varrun\") pod \"csi-node-driver-mc2dz\" (UID: \"58caca33-88e9-4a41-9735-56d04f40c4b1\") " pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:03.098118 kubelet[2306]: E1213 02:10:03.096623 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.098118 kubelet[2306]: W1213 02:10:03.096643 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.098118 kubelet[2306]: E1213 02:10:03.096671 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.098118 kubelet[2306]: I1213 02:10:03.096706 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58caca33-88e9-4a41-9735-56d04f40c4b1-kubelet-dir\") pod \"csi-node-driver-mc2dz\" (UID: \"58caca33-88e9-4a41-9735-56d04f40c4b1\") " pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:03.098118 kubelet[2306]: E1213 02:10:03.097068 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.098118 kubelet[2306]: W1213 02:10:03.097084 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.098118 kubelet[2306]: E1213 02:10:03.097242 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.098118 kubelet[2306]: I1213 02:10:03.097281 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58caca33-88e9-4a41-9735-56d04f40c4b1-registration-dir\") pod \"csi-node-driver-mc2dz\" (UID: \"58caca33-88e9-4a41-9735-56d04f40c4b1\") " pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:03.098118 kubelet[2306]: E1213 02:10:03.097556 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.098694 kubelet[2306]: W1213 02:10:03.097570 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.098694 kubelet[2306]: E1213 02:10:03.097764 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.098694 kubelet[2306]: E1213 02:10:03.097984 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.098694 kubelet[2306]: W1213 02:10:03.097995 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.099302 kubelet[2306]: E1213 02:10:03.098931 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.099302 kubelet[2306]: E1213 02:10:03.099144 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.099302 kubelet[2306]: W1213 02:10:03.099157 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.099302 kubelet[2306]: E1213 02:10:03.099279 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.099797 kubelet[2306]: E1213 02:10:03.099780 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.099925 kubelet[2306]: W1213 02:10:03.099907 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.100167 kubelet[2306]: E1213 02:10:03.100149 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.100337 kubelet[2306]: I1213 02:10:03.100320 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9frc\" (UniqueName: \"kubernetes.io/projected/58caca33-88e9-4a41-9735-56d04f40c4b1-kube-api-access-n9frc\") pod \"csi-node-driver-mc2dz\" (UID: \"58caca33-88e9-4a41-9735-56d04f40c4b1\") " pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:03.100669 kubelet[2306]: E1213 02:10:03.100653 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.100793 kubelet[2306]: W1213 02:10:03.100775 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.100997 kubelet[2306]: E1213 02:10:03.100983 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.101430 kubelet[2306]: E1213 02:10:03.101368 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.101567 kubelet[2306]: W1213 02:10:03.101548 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.101706 kubelet[2306]: E1213 02:10:03.101687 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.102207 kubelet[2306]: E1213 02:10:03.102189 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.102339 kubelet[2306]: W1213 02:10:03.102319 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.102492 kubelet[2306]: E1213 02:10:03.102477 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.102881 kubelet[2306]: E1213 02:10:03.102865 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.103017 kubelet[2306]: W1213 02:10:03.102999 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.103125 kubelet[2306]: E1213 02:10:03.103111 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.103579 kubelet[2306]: E1213 02:10:03.103539 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.107923 kubelet[2306]: W1213 02:10:03.107898 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.108071 kubelet[2306]: E1213 02:10:03.108056 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.108525 kubelet[2306]: E1213 02:10:03.108507 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.108671 kubelet[2306]: W1213 02:10:03.108652 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.108800 kubelet[2306]: E1213 02:10:03.108783 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.109200 kubelet[2306]: E1213 02:10:03.109181 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.109326 kubelet[2306]: W1213 02:10:03.109307 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.109495 kubelet[2306]: E1213 02:10:03.109476 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.163617 env[1343]: time="2024-12-13T02:10:03.163559496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c7497d9dd-8pqnf,Uid:3636086d-9d0f-4819-9f2e-0523d5122b5b,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:03.212143 kubelet[2306]: E1213 02:10:03.210726 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.212143 kubelet[2306]: W1213 02:10:03.210781 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.212143 kubelet[2306]: E1213 02:10:03.210813 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.212143 kubelet[2306]: E1213 02:10:03.211361 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.212143 kubelet[2306]: W1213 02:10:03.211378 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.212143 kubelet[2306]: E1213 02:10:03.211423 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.212143 kubelet[2306]: E1213 02:10:03.211877 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.212143 kubelet[2306]: W1213 02:10:03.211892 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.212143 kubelet[2306]: E1213 02:10:03.211949 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.213479 kubelet[2306]: E1213 02:10:03.212974 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.213761 kubelet[2306]: W1213 02:10:03.213564 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.213761 kubelet[2306]: E1213 02:10:03.213651 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.214670 kubelet[2306]: E1213 02:10:03.214544 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.214670 kubelet[2306]: W1213 02:10:03.214590 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.214942 kubelet[2306]: E1213 02:10:03.214771 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.215454 kubelet[2306]: E1213 02:10:03.215362 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.215717 kubelet[2306]: W1213 02:10:03.215646 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.215975 kubelet[2306]: E1213 02:10:03.215958 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.216592 kubelet[2306]: E1213 02:10:03.216572 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.216768 kubelet[2306]: W1213 02:10:03.216747 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.217037 kubelet[2306]: E1213 02:10:03.217018 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.217639 kubelet[2306]: E1213 02:10:03.217621 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.217865 kubelet[2306]: W1213 02:10:03.217823 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.218113 kubelet[2306]: E1213 02:10:03.218097 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.218764 kubelet[2306]: E1213 02:10:03.218716 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.218964 kubelet[2306]: W1213 02:10:03.218943 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.219210 kubelet[2306]: E1213 02:10:03.219193 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.219994 kubelet[2306]: E1213 02:10:03.219962 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.220156 kubelet[2306]: W1213 02:10:03.220135 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.220476 kubelet[2306]: E1213 02:10:03.220429 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.221851 kubelet[2306]: E1213 02:10:03.221830 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.223482 kubelet[2306]: W1213 02:10:03.223455 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.223877 kubelet[2306]: E1213 02:10:03.223859 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.224318 kubelet[2306]: E1213 02:10:03.224298 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.224543 kubelet[2306]: W1213 02:10:03.224522 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.224884 kubelet[2306]: E1213 02:10:03.224866 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.225674 kubelet[2306]: E1213 02:10:03.225654 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.233634 kubelet[2306]: W1213 02:10:03.225963 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.235065 kubelet[2306]: E1213 02:10:03.235037 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.237480 kubelet[2306]: E1213 02:10:03.237454 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.239722 kubelet[2306]: W1213 02:10:03.239556 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.240568 kubelet[2306]: E1213 02:10:03.240546 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.243062 kubelet[2306]: E1213 02:10:03.242786 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.245577 kubelet[2306]: W1213 02:10:03.245541 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.245983 kubelet[2306]: E1213 02:10:03.245961 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.248920 kubelet[2306]: E1213 02:10:03.248895 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.249142 kubelet[2306]: W1213 02:10:03.249102 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.249473 kubelet[2306]: E1213 02:10:03.249441 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.251135 kubelet[2306]: E1213 02:10:03.251114 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.251294 kubelet[2306]: W1213 02:10:03.251272 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.251680 kubelet[2306]: E1213 02:10:03.251660 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.253127 kubelet[2306]: E1213 02:10:03.253109 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.256048 kubelet[2306]: W1213 02:10:03.256014 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.256457 kubelet[2306]: E1213 02:10:03.256424 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.256880 kubelet[2306]: E1213 02:10:03.256861 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.257039 kubelet[2306]: W1213 02:10:03.257019 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.257352 kubelet[2306]: E1213 02:10:03.257334 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.257864 kubelet[2306]: E1213 02:10:03.257845 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.258013 kubelet[2306]: W1213 02:10:03.257995 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.258313 kubelet[2306]: E1213 02:10:03.258295 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.258805 kubelet[2306]: E1213 02:10:03.258776 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.258978 kubelet[2306]: W1213 02:10:03.258958 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.259299 kubelet[2306]: E1213 02:10:03.259266 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.259775 kubelet[2306]: E1213 02:10:03.259756 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.259935 kubelet[2306]: W1213 02:10:03.259915 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.262040 kubelet[2306]: E1213 02:10:03.262002 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.262218 kubelet[2306]: W1213 02:10:03.262196 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.262847 kubelet[2306]: E1213 02:10:03.262764 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.263001 kubelet[2306]: W1213 02:10:03.262982 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.263173 kubelet[2306]: E1213 02:10:03.263154 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.263360 kubelet[2306]: E1213 02:10:03.263342 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.263584 kubelet[2306]: E1213 02:10:03.263567 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.264250 kubelet[2306]: E1213 02:10:03.264221 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.264480 kubelet[2306]: W1213 02:10:03.264459 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.264636 kubelet[2306]: E1213 02:10:03.264619 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.270063 env[1343]: time="2024-12-13T02:10:03.269980030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:03.270358 env[1343]: time="2024-12-13T02:10:03.270323166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:03.270515 env[1343]: time="2024-12-13T02:10:03.270484914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:03.270874 env[1343]: time="2024-12-13T02:10:03.270829720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d631ad7cecb88fcbefc98054f2fbc3bb2b19d598afa1b4ea0603acf63143d07 pid=2816 runtime=io.containerd.runc.v2 Dec 13 02:10:03.292337 systemd[1]: Started sshd@8-10.128.0.48:22-218.92.0.190:45697.service. Dec 13 02:10:03.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.48:22-218.92.0.190:45697 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:03.298165 kernel: kauditd_printk_skb: 155 callbacks suppressed Dec 13 02:10:03.298256 kernel: audit: type=1130 audit(1734055803.291:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.48:22-218.92.0.190:45697 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:03.310880 env[1343]: time="2024-12-13T02:10:03.310810822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xnljn,Uid:f0285ee9-b18a-45ed-843e-0cb24fda4222,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\"" Dec 13 02:10:03.323921 env[1343]: time="2024-12-13T02:10:03.323880010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:10:03.326500 kubelet[2306]: E1213 02:10:03.326455 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.326500 kubelet[2306]: W1213 02:10:03.326478 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.326712 kubelet[2306]: E1213 02:10:03.326514 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.327006 kubelet[2306]: E1213 02:10:03.326985 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.327006 kubelet[2306]: W1213 02:10:03.327007 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.327176 kubelet[2306]: E1213 02:10:03.327030 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.328157 kubelet[2306]: E1213 02:10:03.327685 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.328157 kubelet[2306]: W1213 02:10:03.327702 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.328157 kubelet[2306]: E1213 02:10:03.327724 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.328157 kubelet[2306]: E1213 02:10:03.328014 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.328157 kubelet[2306]: W1213 02:10:03.328030 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.328157 kubelet[2306]: E1213 02:10:03.328051 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.330153 kubelet[2306]: E1213 02:10:03.329181 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.330153 kubelet[2306]: W1213 02:10:03.329197 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.330153 kubelet[2306]: E1213 02:10:03.329218 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.330153 kubelet[2306]: E1213 02:10:03.329605 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.330153 kubelet[2306]: W1213 02:10:03.329618 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.330153 kubelet[2306]: E1213 02:10:03.329638 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:03.330713 kubelet[2306]: E1213 02:10:03.330575 2306 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:03.330713 kubelet[2306]: W1213 02:10:03.330590 2306 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:03.330713 kubelet[2306]: E1213 02:10:03.330611 2306 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:03.452999 env[1343]: time="2024-12-13T02:10:03.452940503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c7497d9dd-8pqnf,Uid:3636086d-9d0f-4819-9f2e-0523d5122b5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d631ad7cecb88fcbefc98054f2fbc3bb2b19d598afa1b4ea0603acf63143d07\"" Dec 13 02:10:03.453000 audit[2866]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:03.472440 kernel: audit: type=1325 audit(1734055803.453:281): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:03.453000 audit[2866]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffdb17698d0 a2=0 a3=7ffdb17698bc items=0 ppid=2502 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:03.508429 kernel: audit: type=1300 audit(1734055803.453:281): arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffdb17698d0 a2=0 a3=7ffdb17698bc items=0 ppid=2502 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:03.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:03.507000 audit[2866]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:03.542580 kernel: audit: type=1327 audit(1734055803.453:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:03.542734 kernel: audit: type=1325 audit(1734055803.507:282): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:03.507000 audit[2866]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb17698d0 a2=0 a3=0 items=0 ppid=2502 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:03.574981 kernel: audit: type=1300 audit(1734055803.507:282): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb17698d0 a2=0 a3=0 items=0 ppid=2502 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:03.575158 kernel: audit: type=1327 audit(1734055803.507:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:03.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:04.484002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784299817.mount: Deactivated successfully. Dec 13 02:10:04.699361 env[1343]: time="2024-12-13T02:10:04.699290524Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:04.702071 env[1343]: time="2024-12-13T02:10:04.702026374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:04.704231 env[1343]: time="2024-12-13T02:10:04.704191721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:04.707184 env[1343]: time="2024-12-13T02:10:04.707122310Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:04.708042 env[1343]: time="2024-12-13T02:10:04.707999457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:10:04.710682 env[1343]: time="2024-12-13T02:10:04.710628280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:10:04.713902 env[1343]: time="2024-12-13T02:10:04.713844145Z" level=info msg="CreateContainer within sandbox \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:10:04.742825 env[1343]: time="2024-12-13T02:10:04.742671497Z" level=info msg="CreateContainer within sandbox \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bc7276e31fa4789e75d3b08ecb28d154e6ba1d0b9164b0777508a4ba8be8125f\"" Dec 13 02:10:04.745216 env[1343]: time="2024-12-13T02:10:04.743862246Z" level=info msg="StartContainer for \"bc7276e31fa4789e75d3b08ecb28d154e6ba1d0b9164b0777508a4ba8be8125f\"" Dec 13 02:10:04.838822 env[1343]: time="2024-12-13T02:10:04.838182379Z" level=info msg="StartContainer for \"bc7276e31fa4789e75d3b08ecb28d154e6ba1d0b9164b0777508a4ba8be8125f\" returns successfully" Dec 13 02:10:04.895024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc7276e31fa4789e75d3b08ecb28d154e6ba1d0b9164b0777508a4ba8be8125f-rootfs.mount: Deactivated successfully. 
Dec 13 02:10:05.131249 kubelet[2306]: E1213 02:10:05.130306 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:05.250478 env[1343]: time="2024-12-13T02:10:05.250419453Z" level=info msg="shim disconnected" id=bc7276e31fa4789e75d3b08ecb28d154e6ba1d0b9164b0777508a4ba8be8125f Dec 13 02:10:05.250894 env[1343]: time="2024-12-13T02:10:05.250854665Z" level=warning msg="cleaning up after shim disconnected" id=bc7276e31fa4789e75d3b08ecb28d154e6ba1d0b9164b0777508a4ba8be8125f namespace=k8s.io Dec 13 02:10:05.250894 env[1343]: time="2024-12-13T02:10:05.250886612Z" level=info msg="cleaning up dead shim" Dec 13 02:10:05.264673 env[1343]: time="2024-12-13T02:10:05.264623780Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2914 runtime=io.containerd.runc.v2\n" Dec 13 02:10:06.165539 kernel: audit: type=1100 audit(1734055806.139:283): pid=2841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:10:06.139000 audit[2841]: USER_AUTH pid=2841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:10:06.167579 sshd[2841]: Failed password for root from 218.92.0.190 port 45697 ssh2 Dec 13 02:10:06.410000 audit[2841]: USER_AUTH pid=2841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:10:06.436331 sshd[2841]: Failed password for root from 218.92.0.190 port 45697 ssh2 Dec 13 02:10:06.436456 kernel: audit: type=1100 audit(1734055806.410:284): pid=2841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:10:06.682000 audit[2841]: USER_AUTH pid=2841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:10:06.709246 sshd[2841]: Failed password for root from 218.92.0.190 port 45697 ssh2 Dec 13 02:10:06.709404 kernel: audit: type=1100 audit(1734055806.682:285): pid=2841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:10:07.030075 env[1343]: time="2024-12-13T02:10:07.029608751Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:07.037894 env[1343]: time="2024-12-13T02:10:07.037851530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:07.041916 env[1343]: time="2024-12-13T02:10:07.041849867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:07.045576 env[1343]: time="2024-12-13T02:10:07.045538440Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:07.046802 env[1343]: time="2024-12-13T02:10:07.046746950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:10:07.066918 env[1343]: time="2024-12-13T02:10:07.066866975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:10:07.082193 env[1343]: time="2024-12-13T02:10:07.082142240Z" level=info msg="CreateContainer within sandbox \"0d631ad7cecb88fcbefc98054f2fbc3bb2b19d598afa1b4ea0603acf63143d07\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:10:07.108206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627270604.mount: Deactivated successfully. 
Dec 13 02:10:07.114053 env[1343]: time="2024-12-13T02:10:07.113984141Z" level=info msg="CreateContainer within sandbox \"0d631ad7cecb88fcbefc98054f2fbc3bb2b19d598afa1b4ea0603acf63143d07\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"059c3798c650b9a26e750d70fef09568986aa53ba9a4381471e00ff2134cd0d4\"" Dec 13 02:10:07.116042 env[1343]: time="2024-12-13T02:10:07.114893484Z" level=info msg="StartContainer for \"059c3798c650b9a26e750d70fef09568986aa53ba9a4381471e00ff2134cd0d4\"" Dec 13 02:10:07.130105 kubelet[2306]: E1213 02:10:07.130050 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:07.239763 env[1343]: time="2024-12-13T02:10:07.239693775Z" level=info msg="StartContainer for \"059c3798c650b9a26e750d70fef09568986aa53ba9a4381471e00ff2134cd0d4\" returns successfully" Dec 13 02:10:07.493584 kubelet[2306]: I1213 02:10:07.492928 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-c7497d9dd-8pqnf" podStartSLOduration=1.901813499 podStartE2EDuration="5.492864535s" podCreationTimestamp="2024-12-13 02:10:02 +0000 UTC" firstStartedPulling="2024-12-13 02:10:03.457143485 +0000 UTC m=+21.621437967" lastFinishedPulling="2024-12-13 02:10:07.048194527 +0000 UTC m=+25.212489003" observedRunningTime="2024-12-13 02:10:07.492743141 +0000 UTC m=+25.657037632" watchObservedRunningTime="2024-12-13 02:10:07.492864535 +0000 UTC m=+25.657159016" Dec 13 02:10:07.752693 sshd[2841]: Received disconnect from 218.92.0.190 port 45697:11: [preauth] Dec 13 02:10:07.752693 sshd[2841]: Disconnected from authenticating user root 218.92.0.190 port 45697 [preauth] Dec 13 02:10:07.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.48:22-218.92.0.190:45697 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:07.754324 systemd[1]: sshd@8-10.128.0.48:22-218.92.0.190:45697.service: Deactivated successfully. 
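
The pod_startup_latency_tracker entry above reports two figures for calico-typha-c7497d9dd-8pqnf: podStartE2EDuration (observed running time minus pod creation, 5.492864535s) and podStartSLOduration, which excludes the time spent pulling images. Using the monotonic m=+ offsets recorded in the same entry, the two figures are consistent; a quick check (values copied from the entry, nothing else assumed):

    package main

    import "fmt"

    func main() {
        // Monotonic (m=+) offsets and the E2E duration, copied from the entry above.
        const (
            firstStartedPulling = 21.621437967 // s
            lastFinishedPulling = 25.212489003 // s
            podStartE2EDuration = 5.492864535  // observedRunningTime - podCreationTimestamp
        )
        pullTime := lastFinishedPulling - firstStartedPulling // ~3.591051036 s
        fmt.Printf("podStartSLOduration ~= %.9f s\n", podStartE2EDuration-pullTime)
        // Prints ~1.901813499, matching the logged value: the SLO duration is the
        // end-to-end startup time minus the image-pull time.
    }
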
Dec 13 02:10:08.484352 kubelet[2306]: I1213 02:10:08.482798 2306 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:10:09.130612 kubelet[2306]: E1213 02:10:09.130565 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:11.132511 kubelet[2306]: E1213 02:10:11.130613 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:11.658886 env[1343]: time="2024-12-13T02:10:11.658825164Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:11.663280 env[1343]: time="2024-12-13T02:10:11.663229983Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:11.666980 env[1343]: time="2024-12-13T02:10:11.666915277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:11.670115 env[1343]: time="2024-12-13T02:10:11.670075127Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:11.673113 env[1343]: time="2024-12-13T02:10:11.671298034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:10:11.675660 env[1343]: time="2024-12-13T02:10:11.675455851Z" level=info msg="CreateContainer within sandbox \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:10:11.705490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437753510.mount: Deactivated successfully. Dec 13 02:10:11.709783 env[1343]: time="2024-12-13T02:10:11.709140747Z" level=info msg="CreateContainer within sandbox \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd\"" Dec 13 02:10:11.710216 env[1343]: time="2024-12-13T02:10:11.710177075Z" level=info msg="StartContainer for \"46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd\"" Dec 13 02:10:11.768835 systemd[1]: run-containerd-runc-k8s.io-46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd-runc.gK6SXK.mount: Deactivated successfully. 
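
The install-cni container created above is what eventually drops the Calico CNI binaries and a network config onto the node; until it and the main calico/node container finish, the kubelet keeps reporting "cni plugin not initialized" and, below, sandbox creation fails because /var/lib/calico/nodename does not exist yet. A node-side sketch of both preconditions (paths are the ones named in the log; Calico typically installs a *.conflist, though other CNI config extensions exist):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Written by the calico/node container once it has started on this node.
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            fmt.Println("calico/node not ready:", err)
        }

        // containerd/kubelet report the network as ready once a CNI config appears here.
        confs, err := filepath.Glob("/etc/cni/net.d/*.conflist")
        if err != nil || len(confs) == 0 {
            fmt.Println("no CNI network config found in /etc/cni/net.d")
            return
        }
        fmt.Println("CNI config present:", confs)
    }
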
Dec 13 02:10:11.848780 env[1343]: time="2024-12-13T02:10:11.848714529Z" level=info msg="StartContainer for \"46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd\" returns successfully" Dec 13 02:10:12.835484 env[1343]: time="2024-12-13T02:10:12.835400795Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:10:12.865554 kubelet[2306]: I1213 02:10:12.864516 2306 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:10:12.871166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd-rootfs.mount: Deactivated successfully. Dec 13 02:10:12.905296 kubelet[2306]: I1213 02:10:12.905236 2306 topology_manager.go:215] "Topology Admit Handler" podUID="755a7ddd-d1f9-477d-b8ad-3e9f709e61fd" podNamespace="kube-system" podName="coredns-76f75df574-7mq9l" Dec 13 02:10:12.922708 kubelet[2306]: I1213 02:10:12.922663 2306 topology_manager.go:215] "Topology Admit Handler" podUID="1f0b368d-f96a-4022-88da-c258681fa6eb" podNamespace="kube-system" podName="coredns-76f75df574-c5mt8" Dec 13 02:10:12.923906 kubelet[2306]: I1213 02:10:12.923864 2306 topology_manager.go:215] "Topology Admit Handler" podUID="80fc8685-542f-4623-8a52-98ad685ebdfb" podNamespace="calico-system" podName="calico-kube-controllers-758847f549-2wzrz" Dec 13 02:10:12.928548 kubelet[2306]: I1213 02:10:12.928516 2306 topology_manager.go:215] "Topology Admit Handler" podUID="0e610094-b7bd-43b3-a038-f2b1fd75f780" podNamespace="calico-apiserver" podName="calico-apiserver-77fb7456f4-8nsvc" Dec 13 02:10:12.931944 kubelet[2306]: I1213 02:10:12.931918 2306 topology_manager.go:215] "Topology Admit Handler" podUID="77407504-933f-4624-af4c-dd5aec0d5323" podNamespace="calico-apiserver" podName="calico-apiserver-77fb7456f4-pgkrc" Dec 13 02:10:12.937728 kubelet[2306]: I1213 02:10:12.931499 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wwr\" (UniqueName: \"kubernetes.io/projected/755a7ddd-d1f9-477d-b8ad-3e9f709e61fd-kube-api-access-b9wwr\") pod \"coredns-76f75df574-7mq9l\" (UID: \"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd\") " pod="kube-system/coredns-76f75df574-7mq9l" Dec 13 02:10:12.937728 kubelet[2306]: I1213 02:10:12.933913 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/755a7ddd-d1f9-477d-b8ad-3e9f709e61fd-config-volume\") pod \"coredns-76f75df574-7mq9l\" (UID: \"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd\") " pod="kube-system/coredns-76f75df574-7mq9l" Dec 13 02:10:13.035147 kubelet[2306]: I1213 02:10:13.035089 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77407504-933f-4624-af4c-dd5aec0d5323-calico-apiserver-certs\") pod \"calico-apiserver-77fb7456f4-pgkrc\" (UID: \"77407504-933f-4624-af4c-dd5aec0d5323\") " pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" Dec 13 02:10:13.035407 kubelet[2306]: I1213 02:10:13.035162 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkj8\" (UniqueName: 
\"kubernetes.io/projected/80fc8685-542f-4623-8a52-98ad685ebdfb-kube-api-access-8qkj8\") pod \"calico-kube-controllers-758847f549-2wzrz\" (UID: \"80fc8685-542f-4623-8a52-98ad685ebdfb\") " pod="calico-system/calico-kube-controllers-758847f549-2wzrz" Dec 13 02:10:13.035407 kubelet[2306]: I1213 02:10:13.035199 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jl82\" (UniqueName: \"kubernetes.io/projected/77407504-933f-4624-af4c-dd5aec0d5323-kube-api-access-6jl82\") pod \"calico-apiserver-77fb7456f4-pgkrc\" (UID: \"77407504-933f-4624-af4c-dd5aec0d5323\") " pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" Dec 13 02:10:13.035407 kubelet[2306]: I1213 02:10:13.035236 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e610094-b7bd-43b3-a038-f2b1fd75f780-calico-apiserver-certs\") pod \"calico-apiserver-77fb7456f4-8nsvc\" (UID: \"0e610094-b7bd-43b3-a038-f2b1fd75f780\") " pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" Dec 13 02:10:13.035407 kubelet[2306]: I1213 02:10:13.035267 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bwvj\" (UniqueName: \"kubernetes.io/projected/0e610094-b7bd-43b3-a038-f2b1fd75f780-kube-api-access-8bwvj\") pod \"calico-apiserver-77fb7456f4-8nsvc\" (UID: \"0e610094-b7bd-43b3-a038-f2b1fd75f780\") " pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" Dec 13 02:10:13.035407 kubelet[2306]: I1213 02:10:13.035297 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhlww\" (UniqueName: \"kubernetes.io/projected/1f0b368d-f96a-4022-88da-c258681fa6eb-kube-api-access-hhlww\") pod \"coredns-76f75df574-c5mt8\" (UID: \"1f0b368d-f96a-4022-88da-c258681fa6eb\") " pod="kube-system/coredns-76f75df574-c5mt8" Dec 13 02:10:13.035800 kubelet[2306]: I1213 02:10:13.035333 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f0b368d-f96a-4022-88da-c258681fa6eb-config-volume\") pod \"coredns-76f75df574-c5mt8\" (UID: \"1f0b368d-f96a-4022-88da-c258681fa6eb\") " pod="kube-system/coredns-76f75df574-c5mt8" Dec 13 02:10:13.035800 kubelet[2306]: I1213 02:10:13.035371 2306 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80fc8685-542f-4623-8a52-98ad685ebdfb-tigera-ca-bundle\") pod \"calico-kube-controllers-758847f549-2wzrz\" (UID: \"80fc8685-542f-4623-8a52-98ad685ebdfb\") " pod="calico-system/calico-kube-controllers-758847f549-2wzrz" Dec 13 02:10:13.134336 env[1343]: time="2024-12-13T02:10:13.133676705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mc2dz,Uid:58caca33-88e9-4a41-9735-56d04f40c4b1,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:13.234486 env[1343]: time="2024-12-13T02:10:13.234044246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7mq9l,Uid:755a7ddd-d1f9-477d-b8ad-3e9f709e61fd,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:13.235125 env[1343]: time="2024-12-13T02:10:13.234044248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758847f549-2wzrz,Uid:80fc8685-542f-4623-8a52-98ad685ebdfb,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:13.235251 env[1343]: 
time="2024-12-13T02:10:13.235192268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c5mt8,Uid:1f0b368d-f96a-4022-88da-c258681fa6eb,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:13.259678 env[1343]: time="2024-12-13T02:10:13.259615582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-pgkrc,Uid:77407504-933f-4624-af4c-dd5aec0d5323,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:10:13.262724 env[1343]: time="2024-12-13T02:10:13.262671409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-8nsvc,Uid:0e610094-b7bd-43b3-a038-f2b1fd75f780,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:10:13.921755 env[1343]: time="2024-12-13T02:10:13.921691115Z" level=info msg="shim disconnected" id=46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd Dec 13 02:10:13.921755 env[1343]: time="2024-12-13T02:10:13.921757236Z" level=warning msg="cleaning up after shim disconnected" id=46114e0a010f2c615d78dee249183baadb72db876cc206769ef2f0963e069fcd namespace=k8s.io Dec 13 02:10:13.922649 env[1343]: time="2024-12-13T02:10:13.921771714Z" level=info msg="cleaning up dead shim" Dec 13 02:10:13.936299 env[1343]: time="2024-12-13T02:10:13.936248775Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3032 runtime=io.containerd.runc.v2\n" Dec 13 02:10:14.286494 env[1343]: time="2024-12-13T02:10:14.286286903Z" level=error msg="Failed to destroy network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.287235 env[1343]: time="2024-12-13T02:10:14.287177117Z" level=error msg="encountered an error cleaning up failed sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.287365 env[1343]: time="2024-12-13T02:10:14.287246813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758847f549-2wzrz,Uid:80fc8685-542f-4623-8a52-98ad685ebdfb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.290542 kubelet[2306]: E1213 02:10:14.287943 2306 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.290542 kubelet[2306]: E1213 02:10:14.288047 2306 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758847f549-2wzrz" Dec 13 02:10:14.290542 kubelet[2306]: E1213 02:10:14.288112 2306 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758847f549-2wzrz" Dec 13 02:10:14.291243 kubelet[2306]: E1213 02:10:14.290482 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-758847f549-2wzrz_calico-system(80fc8685-542f-4623-8a52-98ad685ebdfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-758847f549-2wzrz_calico-system(80fc8685-542f-4623-8a52-98ad685ebdfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-758847f549-2wzrz" podUID="80fc8685-542f-4623-8a52-98ad685ebdfb" Dec 13 02:10:14.329925 env[1343]: time="2024-12-13T02:10:14.329813707Z" level=error msg="Failed to destroy network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.330456 env[1343]: time="2024-12-13T02:10:14.330372598Z" level=error msg="encountered an error cleaning up failed sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.330591 env[1343]: time="2024-12-13T02:10:14.330484851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mc2dz,Uid:58caca33-88e9-4a41-9735-56d04f40c4b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.333649 kubelet[2306]: E1213 02:10:14.330918 2306 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.333649 kubelet[2306]: E1213 02:10:14.331066 2306 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:14.333649 kubelet[2306]: E1213 02:10:14.331124 2306 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mc2dz" Dec 13 02:10:14.333922 kubelet[2306]: E1213 02:10:14.333412 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mc2dz_calico-system(58caca33-88e9-4a41-9735-56d04f40c4b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mc2dz_calico-system(58caca33-88e9-4a41-9735-56d04f40c4b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:14.340269 env[1343]: time="2024-12-13T02:10:14.340195798Z" level=error msg="Failed to destroy network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.340762 env[1343]: time="2024-12-13T02:10:14.340709239Z" level=error msg="encountered an error cleaning up failed sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.340888 env[1343]: time="2024-12-13T02:10:14.340785204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c5mt8,Uid:1f0b368d-f96a-4022-88da-c258681fa6eb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.344795 kubelet[2306]: E1213 02:10:14.341161 2306 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.344795 kubelet[2306]: E1213 02:10:14.341321 2306 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c5mt8" Dec 13 02:10:14.344795 kubelet[2306]: E1213 02:10:14.341372 2306 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-c5mt8" Dec 13 02:10:14.345046 kubelet[2306]: E1213 02:10:14.341503 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-c5mt8_kube-system(1f0b368d-f96a-4022-88da-c258681fa6eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-c5mt8_kube-system(1f0b368d-f96a-4022-88da-c258681fa6eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c5mt8" podUID="1f0b368d-f96a-4022-88da-c258681fa6eb" Dec 13 02:10:14.375122 env[1343]: time="2024-12-13T02:10:14.375026529Z" level=error msg="Failed to destroy network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.375560 env[1343]: time="2024-12-13T02:10:14.375505654Z" level=error msg="encountered an error cleaning up failed sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.375690 env[1343]: time="2024-12-13T02:10:14.375588011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-pgkrc,Uid:77407504-933f-4624-af4c-dd5aec0d5323,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.376020 kubelet[2306]: E1213 02:10:14.375890 2306 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.376020 kubelet[2306]: E1213 02:10:14.375964 2306 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" Dec 13 02:10:14.376020 kubelet[2306]: E1213 02:10:14.375996 2306 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" Dec 13 02:10:14.376273 kubelet[2306]: E1213 02:10:14.376073 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77fb7456f4-pgkrc_calico-apiserver(77407504-933f-4624-af4c-dd5aec0d5323)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77fb7456f4-pgkrc_calico-apiserver(77407504-933f-4624-af4c-dd5aec0d5323)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" podUID="77407504-933f-4624-af4c-dd5aec0d5323" Dec 13 02:10:14.388885 env[1343]: time="2024-12-13T02:10:14.388798420Z" level=error msg="Failed to destroy network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.389448 env[1343]: time="2024-12-13T02:10:14.389374379Z" level=error msg="encountered an error cleaning up failed sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.389686 env[1343]: time="2024-12-13T02:10:14.389478591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7mq9l,Uid:755a7ddd-d1f9-477d-b8ad-3e9f709e61fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.389888 kubelet[2306]: E1213 02:10:14.389865 2306 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 02:10:14.390003 kubelet[2306]: E1213 02:10:14.389932 2306 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7mq9l" Dec 13 02:10:14.390003 kubelet[2306]: E1213 02:10:14.389980 2306 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7mq9l" Dec 13 02:10:14.390124 kubelet[2306]: E1213 02:10:14.390052 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7mq9l_kube-system(755a7ddd-d1f9-477d-b8ad-3e9f709e61fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7mq9l_kube-system(755a7ddd-d1f9-477d-b8ad-3e9f709e61fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7mq9l" podUID="755a7ddd-d1f9-477d-b8ad-3e9f709e61fd" Dec 13 02:10:14.392215 env[1343]: time="2024-12-13T02:10:14.391424274Z" level=error msg="Failed to destroy network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.392215 env[1343]: time="2024-12-13T02:10:14.391967958Z" level=error msg="encountered an error cleaning up failed sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.392215 env[1343]: time="2024-12-13T02:10:14.392039003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-8nsvc,Uid:0e610094-b7bd-43b3-a038-f2b1fd75f780,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.392622 kubelet[2306]: E1213 02:10:14.392370 2306 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.392720 kubelet[2306]: E1213 02:10:14.392679 2306 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" Dec 13 02:10:14.392720 kubelet[2306]: E1213 02:10:14.392717 2306 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" Dec 13 02:10:14.392923 kubelet[2306]: E1213 02:10:14.392787 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77fb7456f4-8nsvc_calico-apiserver(0e610094-b7bd-43b3-a038-f2b1fd75f780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77fb7456f4-8nsvc_calico-apiserver(0e610094-b7bd-43b3-a038-f2b1fd75f780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" podUID="0e610094-b7bd-43b3-a038-f2b1fd75f780" Dec 13 02:10:14.514421 kubelet[2306]: I1213 02:10:14.514267 2306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:14.517211 env[1343]: time="2024-12-13T02:10:14.517153461Z" level=info msg="StopPodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\"" Dec 13 02:10:14.525737 env[1343]: time="2024-12-13T02:10:14.525527659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:10:14.539590 kubelet[2306]: I1213 02:10:14.535574 2306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:14.539775 env[1343]: time="2024-12-13T02:10:14.538623422Z" level=info msg="StopPodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\"" Dec 13 02:10:14.544415 kubelet[2306]: I1213 02:10:14.544360 2306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:14.545497 env[1343]: time="2024-12-13T02:10:14.545455008Z" level=info msg="StopPodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\"" Dec 13 02:10:14.551022 kubelet[2306]: I1213 02:10:14.550361 2306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:14.553944 env[1343]: time="2024-12-13T02:10:14.553893852Z" level=info msg="StopPodSandbox for 
\"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\"" Dec 13 02:10:14.556820 kubelet[2306]: I1213 02:10:14.556793 2306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:14.558554 env[1343]: time="2024-12-13T02:10:14.558509031Z" level=info msg="StopPodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\"" Dec 13 02:10:14.559899 kubelet[2306]: I1213 02:10:14.559604 2306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:14.561304 env[1343]: time="2024-12-13T02:10:14.560645027Z" level=info msg="StopPodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\"" Dec 13 02:10:14.690706 env[1343]: time="2024-12-13T02:10:14.690625426Z" level=error msg="StopPodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" failed" error="failed to destroy network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.691412 kubelet[2306]: E1213 02:10:14.691219 2306 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:14.691412 kubelet[2306]: E1213 02:10:14.691329 2306 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf"} Dec 13 02:10:14.691725 kubelet[2306]: E1213 02:10:14.691626 2306 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f0b368d-f96a-4022-88da-c258681fa6eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:14.691725 kubelet[2306]: E1213 02:10:14.691687 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f0b368d-f96a-4022-88da-c258681fa6eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-c5mt8" podUID="1f0b368d-f96a-4022-88da-c258681fa6eb" Dec 13 02:10:14.728584 env[1343]: time="2024-12-13T02:10:14.728505833Z" level=error msg="StopPodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" failed" error="failed to destroy network for sandbox 
\"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.728873 kubelet[2306]: E1213 02:10:14.728843 2306 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:14.729006 kubelet[2306]: E1213 02:10:14.728907 2306 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789"} Dec 13 02:10:14.729006 kubelet[2306]: E1213 02:10:14.728968 2306 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:14.729204 kubelet[2306]: E1213 02:10:14.729013 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7mq9l" podUID="755a7ddd-d1f9-477d-b8ad-3e9f709e61fd" Dec 13 02:10:14.730167 env[1343]: time="2024-12-13T02:10:14.730105316Z" level=error msg="StopPodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" failed" error="failed to destroy network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.730466 kubelet[2306]: E1213 02:10:14.730414 2306 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:14.730466 kubelet[2306]: E1213 02:10:14.730456 2306 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5"} Dec 13 02:10:14.730638 kubelet[2306]: E1213 02:10:14.730506 2306 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58caca33-88e9-4a41-9735-56d04f40c4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:14.730638 kubelet[2306]: E1213 02:10:14.730547 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58caca33-88e9-4a41-9735-56d04f40c4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mc2dz" podUID="58caca33-88e9-4a41-9735-56d04f40c4b1" Dec 13 02:10:14.741667 env[1343]: time="2024-12-13T02:10:14.741567492Z" level=error msg="StopPodSandbox for \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" failed" error="failed to destroy network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.741887 kubelet[2306]: E1213 02:10:14.741861 2306 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:14.742025 kubelet[2306]: E1213 02:10:14.741912 2306 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049"} Dec 13 02:10:14.742025 kubelet[2306]: E1213 02:10:14.741973 2306 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80fc8685-542f-4623-8a52-98ad685ebdfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:14.742025 kubelet[2306]: E1213 02:10:14.742018 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80fc8685-542f-4623-8a52-98ad685ebdfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-758847f549-2wzrz" podUID="80fc8685-542f-4623-8a52-98ad685ebdfb" Dec 13 02:10:14.756463 env[1343]: time="2024-12-13T02:10:14.756373490Z" level=error msg="StopPodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" failed" error="failed to destroy network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.756777 kubelet[2306]: E1213 02:10:14.756696 2306 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:14.756777 kubelet[2306]: E1213 02:10:14.756758 2306 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f"} Dec 13 02:10:14.756952 kubelet[2306]: E1213 02:10:14.756816 2306 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e610094-b7bd-43b3-a038-f2b1fd75f780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:14.756952 kubelet[2306]: E1213 02:10:14.756863 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e610094-b7bd-43b3-a038-f2b1fd75f780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" podUID="0e610094-b7bd-43b3-a038-f2b1fd75f780" Dec 13 02:10:14.761240 env[1343]: time="2024-12-13T02:10:14.761170726Z" level=error msg="StopPodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" failed" error="failed to destroy network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:14.761653 kubelet[2306]: E1213 02:10:14.761610 2306 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:14.761778 kubelet[2306]: E1213 02:10:14.761682 2306 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8"} Dec 13 02:10:14.761778 kubelet[2306]: E1213 02:10:14.761742 2306 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77407504-933f-4624-af4c-dd5aec0d5323\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:14.761939 kubelet[2306]: E1213 02:10:14.761791 2306 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77407504-933f-4624-af4c-dd5aec0d5323\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" podUID="77407504-933f-4624-af4c-dd5aec0d5323" Dec 13 02:10:14.874293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789-shm.mount: Deactivated successfully. Dec 13 02:10:14.875105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049-shm.mount: Deactivated successfully. Dec 13 02:10:14.875290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5-shm.mount: Deactivated successfully. Dec 13 02:10:22.503790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358016853.mount: Deactivated successfully. 
Dec 13 02:10:22.546315 env[1343]: time="2024-12-13T02:10:22.546245300Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:22.549291 env[1343]: time="2024-12-13T02:10:22.549245259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:22.551498 env[1343]: time="2024-12-13T02:10:22.551447314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:22.553710 env[1343]: time="2024-12-13T02:10:22.553660794Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:22.554580 env[1343]: time="2024-12-13T02:10:22.554532397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:10:22.580140 env[1343]: time="2024-12-13T02:10:22.579643669Z" level=info msg="CreateContainer within sandbox \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:10:22.610838 env[1343]: time="2024-12-13T02:10:22.610760226Z" level=info msg="CreateContainer within sandbox \"d5d81fc9ac869d0b1ea49114dff4d6298c16cbb95d1f7f4a78c34b9214c0748b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc\"" Dec 13 02:10:22.613081 env[1343]: time="2024-12-13T02:10:22.613014094Z" level=info msg="StartContainer for \"9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc\"" Dec 13 02:10:22.692730 env[1343]: time="2024-12-13T02:10:22.692663009Z" level=info msg="StartContainer for \"9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc\" returns successfully" Dec 13 02:10:22.811872 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:10:22.812062 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 02:10:23.613537 kubelet[2306]: I1213 02:10:23.611455 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xnljn" podStartSLOduration=2.379219543 podStartE2EDuration="21.611378399s" podCreationTimestamp="2024-12-13 02:10:02 +0000 UTC" firstStartedPulling="2024-12-13 02:10:03.322760778 +0000 UTC m=+21.487055247" lastFinishedPulling="2024-12-13 02:10:22.554919618 +0000 UTC m=+40.719214103" observedRunningTime="2024-12-13 02:10:23.609763262 +0000 UTC m=+41.774057763" watchObservedRunningTime="2024-12-13 02:10:23.611378399 +0000 UTC m=+41.775672890" Dec 13 02:10:23.631151 systemd[1]: run-containerd-runc-k8s.io-9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc-runc.fmNy5M.mount: Deactivated successfully. 
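The pod_startup_latency_tracker entry above reports two durations for calico-node-xnljn. Their relationship can be checked directly from the logged monotonic (m=+) offsets; the sketch below is my reading of those numbers, not kubelet source:

    # Reproduce podStartSLOduration from the values logged above: SLO startup time is the
    # end-to-end startup time minus the time spent pulling the image.
    first_started_pulling = 21.487055247   # m=+ offset of firstStartedPulling
    last_finished_pulling = 40.719214103   # m=+ offset of lastFinishedPulling
    pod_start_e2e         = 21.611378399   # podStartE2EDuration, seconds since pod creation

    image_pull = last_finished_pulling - first_started_pulling   # 19.232158856 s
    pod_start_slo = pod_start_e2e - image_pull                    # 2.379219543 s
    print(f"image pull {image_pull:.9f}s, SLO startup {pod_start_slo:.9f}s")

The result matches the logged podStartSLOduration=2.379219543: almost all of the 21.6 s end-to-end startup was the ghcr.io/flatcar/calico/node:v3.29.1 image pull.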
Dec 13 02:10:24.164481 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:10:24.164659 kernel: audit: type=1400 audit(1734055824.154:287): avc: denied { write } for pid=3475 comm="tee" name="fd" dev="proc" ino=24814 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.154000 audit[3475]: AVC avc: denied { write } for pid=3475 comm="tee" name="fd" dev="proc" ino=24814 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.215645 kernel: audit: type=1300 audit(1734055824.154:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb6b2b9b5 a2=241 a3=1b6 items=1 ppid=3452 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.154000 audit[3475]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb6b2b9b5 a2=241 a3=1b6 items=1 ppid=3452 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.154000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 02:10:24.250627 kernel: audit: type=1307 audit(1734055824.154:287): cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 02:10:24.250773 kernel: audit: type=1302 audit(1734055824.154:287): item=0 name="/dev/fd/63" inode=24804 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.154000 audit: PATH item=0 name="/dev/fd/63" inode=24804 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.271418 kernel: audit: type=1327 audit(1734055824.154:287): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.324000 audit[3507]: AVC avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=24371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.348412 kernel: audit: type=1400 audit(1734055824.324:288): avc: denied { write } for pid=3507 comm="tee" name="fd" dev="proc" ino=24371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.324000 audit[3507]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff40ed39b4 a2=241 a3=1b6 items=1 ppid=3461 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.381429 kernel: audit: type=1300 audit(1734055824.324:288): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff40ed39b4 a2=241 a3=1b6 items=1 ppid=3461 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 02:10:24.324000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 02:10:24.414947 kernel: audit: type=1307 audit(1734055824.324:288): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 02:10:24.415110 kernel: audit: type=1302 audit(1734055824.324:288): item=0 name="/dev/fd/63" inode=24362 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.324000 audit: PATH item=0 name="/dev/fd/63" inode=24362 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.437432 kernel: audit: type=1327 audit(1734055824.324:288): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.362000 audit[3511]: AVC avc: denied { write } for pid=3511 comm="tee" name="fd" dev="proc" ino=24379 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.362000 audit[3511]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdaf739c6 a2=241 a3=1b6 items=1 ppid=3449 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.362000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 02:10:24.362000 audit: PATH item=0 name="/dev/fd/63" inode=24365 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.365000 audit[3517]: AVC avc: denied { write } for pid=3517 comm="tee" name="fd" dev="proc" ino=24381 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.365000 audit[3517]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc8a149c4 a2=241 a3=1b6 items=1 ppid=3451 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.365000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 02:10:24.365000 audit: PATH item=0 name="/dev/fd/63" inode=24821 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.440000 audit[3521]: AVC avc: denied { write } for pid=3521 comm="tee" name="fd" dev="proc" ino=24395 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.440000 audit[3521]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffee9a8f9c5 a2=241 a3=1b6 items=1 
ppid=3457 pid=3521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.440000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 02:10:24.440000 audit: PATH item=0 name="/dev/fd/63" inode=24368 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.440000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.466000 audit[3524]: AVC avc: denied { write } for pid=3524 comm="tee" name="fd" dev="proc" ino=24399 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.466000 audit[3524]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1e0dc9c4 a2=241 a3=1b6 items=1 ppid=3464 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.466000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 02:10:24.466000 audit: PATH item=0 name="/dev/fd/63" inode=24383 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.483000 audit[3526]: AVC avc: denied { write } for pid=3526 comm="tee" name="fd" dev="proc" ino=24832 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 02:10:24.483000 audit[3526]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4c9c39c4 a2=241 a3=1b6 items=1 ppid=3455 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:24.483000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 02:10:24.483000 audit: PATH item=0 name="/dev/fd/63" inode=24824 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:10:24.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 02:10:24.618893 systemd[1]: run-containerd-runc-k8s.io-9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc-runc.2k6Dvn.mount: Deactivated successfully. Dec 13 02:10:25.131185 env[1343]: time="2024-12-13T02:10:25.131127861Z" level=info msg="StopPodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\"" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.190 [INFO][3567] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.190 [INFO][3567] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" iface="eth0" netns="/var/run/netns/cni-2a5acf3e-d0a7-19e3-5f93-43ac1a370156" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.190 [INFO][3567] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" iface="eth0" netns="/var/run/netns/cni-2a5acf3e-d0a7-19e3-5f93-43ac1a370156" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.190 [INFO][3567] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" iface="eth0" netns="/var/run/netns/cni-2a5acf3e-d0a7-19e3-5f93-43ac1a370156" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.191 [INFO][3567] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.191 [INFO][3567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.219 [INFO][3574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.220 [INFO][3574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.220 [INFO][3574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.229 [WARNING][3574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.229 [INFO][3574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.231 [INFO][3574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:25.235129 env[1343]: 2024-12-13 02:10:25.233 [INFO][3567] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:25.242141 systemd[1]: run-netns-cni\x2d2a5acf3e\x2dd0a7\x2d19e3\x2d5f93\x2d43ac1a370156.mount: Deactivated successfully. 
Dec 13 02:10:25.242709 env[1343]: time="2024-12-13T02:10:25.242129282Z" level=info msg="TearDown network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" successfully" Dec 13 02:10:25.242854 env[1343]: time="2024-12-13T02:10:25.242823480Z" level=info msg="StopPodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" returns successfully" Dec 13 02:10:25.245356 env[1343]: time="2024-12-13T02:10:25.245310408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-8nsvc,Uid:0e610094-b7bd-43b3-a038-f2b1fd75f780,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:10:25.448089 systemd-networkd[1083]: cali7c5d39de0e3: Link UP Dec 13 02:10:25.464730 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:10:25.464933 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7c5d39de0e3: link becomes ready Dec 13 02:10:25.466875 systemd-networkd[1083]: cali7c5d39de0e3: Gained carrier Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.311 [INFO][3581] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.327 [INFO][3581] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0 calico-apiserver-77fb7456f4- calico-apiserver 0e610094-b7bd-43b3-a038-f2b1fd75f780 768 0 2024-12-13 02:10:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77fb7456f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal calico-apiserver-77fb7456f4-8nsvc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c5d39de0e3 [] []}} ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.327 [INFO][3581] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.379 [INFO][3598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" HandleID="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.399 [INFO][3598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" HandleID="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0003107f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", "pod":"calico-apiserver-77fb7456f4-8nsvc", "timestamp":"2024-12-13 02:10:25.379300226 +0000 UTC"}, Hostname:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.399 [INFO][3598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.399 [INFO][3598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.399 [INFO][3598] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.402 [INFO][3598] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.406 [INFO][3598] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.412 [INFO][3598] ipam/ipam.go 489: Trying affinity for 192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.414 [INFO][3598] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.417 [INFO][3598] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.417 [INFO][3598] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.419 [INFO][3598] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.424 [INFO][3598] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.432 [INFO][3598] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.129/26] block=192.168.89.128/26 handle="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.432 [INFO][3598] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.129/26] handle="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:25.497537 env[1343]: 
2024-12-13 02:10:25.432 [INFO][3598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:25.497537 env[1343]: 2024-12-13 02:10:25.432 [INFO][3598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.129/26] IPv6=[] ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" HandleID="k8s-pod-network.e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.498748 env[1343]: 2024-12-13 02:10:25.434 [INFO][3581] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e610094-b7bd-43b3-a038-f2b1fd75f780", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-77fb7456f4-8nsvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c5d39de0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:25.498748 env[1343]: 2024-12-13 02:10:25.435 [INFO][3581] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.129/32] ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.498748 env[1343]: 2024-12-13 02:10:25.435 [INFO][3581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c5d39de0e3 ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.498748 env[1343]: 2024-12-13 02:10:25.472 [INFO][3581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" 
Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.498748 env[1343]: 2024-12-13 02:10:25.473 [INFO][3581] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e610094-b7bd-43b3-a038-f2b1fd75f780", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e", Pod:"calico-apiserver-77fb7456f4-8nsvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c5d39de0e3", MAC:"ba:29:77:24:fd:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:25.498748 env[1343]: 2024-12-13 02:10:25.486 [INFO][3581] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-8nsvc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:25.526229 env[1343]: time="2024-12-13T02:10:25.525918368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:25.526229 env[1343]: time="2024-12-13T02:10:25.525975839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:25.526229 env[1343]: time="2024-12-13T02:10:25.525996389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:25.526799 env[1343]: time="2024-12-13T02:10:25.526705998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e pid=3629 runtime=io.containerd.runc.v2 Dec 13 02:10:25.670426 env[1343]: time="2024-12-13T02:10:25.670341725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-8nsvc,Uid:0e610094-b7bd-43b3-a038-f2b1fd75f780,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e\"" Dec 13 02:10:25.674900 env[1343]: time="2024-12-13T02:10:25.674855454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:10:26.133423 env[1343]: time="2024-12-13T02:10:26.133343537Z" level=info msg="StopPodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\"" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.197 [INFO][3683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.197 [INFO][3683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" iface="eth0" netns="/var/run/netns/cni-6cd6c5f7-77e1-2322-326d-3ed9e567f81e" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.197 [INFO][3683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" iface="eth0" netns="/var/run/netns/cni-6cd6c5f7-77e1-2322-326d-3ed9e567f81e" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.198 [INFO][3683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" iface="eth0" netns="/var/run/netns/cni-6cd6c5f7-77e1-2322-326d-3ed9e567f81e" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.198 [INFO][3683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.198 [INFO][3683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.229 [INFO][3690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.229 [INFO][3690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.230 [INFO][3690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.237 [WARNING][3690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.237 [INFO][3690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.238 [INFO][3690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:26.240959 env[1343]: 2024-12-13 02:10:26.239 [INFO][3683] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:26.247947 env[1343]: time="2024-12-13T02:10:26.247202156Z" level=info msg="TearDown network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" successfully" Dec 13 02:10:26.247947 env[1343]: time="2024-12-13T02:10:26.247252436Z" level=info msg="StopPodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" returns successfully" Dec 13 02:10:26.245837 systemd[1]: run-netns-cni\x2d6cd6c5f7\x2d77e1\x2d2322\x2d326d\x2d3ed9e567f81e.mount: Deactivated successfully. Dec 13 02:10:26.248746 env[1343]: time="2024-12-13T02:10:26.248707871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7mq9l,Uid:755a7ddd-d1f9-477d-b8ad-3e9f709e61fd,Namespace:kube-system,Attempt:1,}" Dec 13 02:10:26.428180 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8652c255186: link becomes ready Dec 13 02:10:26.426576 systemd-networkd[1083]: cali8652c255186: Link UP Dec 13 02:10:26.429601 systemd-networkd[1083]: cali8652c255186: Gained carrier Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.311 [INFO][3697] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.324 [INFO][3697] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0 coredns-76f75df574- kube-system 755a7ddd-d1f9-477d-b8ad-3e9f709e61fd 777 0 2024-12-13 02:09:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal coredns-76f75df574-7mq9l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8652c255186 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.324 [INFO][3697] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" 
WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.359 [INFO][3709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" HandleID="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.368 [INFO][3709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" HandleID="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", "pod":"coredns-76f75df574-7mq9l", "timestamp":"2024-12-13 02:10:26.359755798 +0000 UTC"}, Hostname:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.369 [INFO][3709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.369 [INFO][3709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.369 [INFO][3709] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.370 [INFO][3709] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.375 [INFO][3709] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.380 [INFO][3709] ipam/ipam.go 489: Trying affinity for 192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.383 [INFO][3709] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.385 [INFO][3709] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.385 [INFO][3709] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.387 [INFO][3709] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120 Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.393 [INFO][3709] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.401 [INFO][3709] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.130/26] block=192.168.89.128/26 handle="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.401 [INFO][3709] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.130/26] handle="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.401 [INFO][3709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:26.457536 env[1343]: 2024-12-13 02:10:26.401 [INFO][3709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.130/26] IPv6=[] ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" HandleID="k8s-pod-network.887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.459082 env[1343]: 2024-12-13 02:10:26.404 [INFO][3697] cni-plugin/k8s.go 386: Populated endpoint ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-7mq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8652c255186", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:26.459082 env[1343]: 2024-12-13 02:10:26.404 [INFO][3697] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.130/32] ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.459082 env[1343]: 2024-12-13 02:10:26.404 [INFO][3697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8652c255186 ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.459082 env[1343]: 2024-12-13 02:10:26.433 [INFO][3697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.459082 env[1343]: 2024-12-13 02:10:26.434 [INFO][3697] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120", Pod:"coredns-76f75df574-7mq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8652c255186", MAC:"b6:9e:80:a3:10:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:26.459082 env[1343]: 2024-12-13 02:10:26.452 [INFO][3697] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120" Namespace="kube-system" Pod="coredns-76f75df574-7mq9l" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:26.530639 env[1343]: time="2024-12-13T02:10:26.529548686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:26.530639 env[1343]: time="2024-12-13T02:10:26.529652467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:26.530639 env[1343]: time="2024-12-13T02:10:26.529699809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:26.530639 env[1343]: time="2024-12-13T02:10:26.530002333Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120 pid=3739 runtime=io.containerd.runc.v2 Dec 13 02:10:26.682754 env[1343]: time="2024-12-13T02:10:26.680711758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7mq9l,Uid:755a7ddd-d1f9-477d-b8ad-3e9f709e61fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120\"" Dec 13 02:10:26.689795 env[1343]: time="2024-12-13T02:10:26.689745352Z" level=info msg="CreateContainer within sandbox \"887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:10:26.761570 env[1343]: time="2024-12-13T02:10:26.761493908Z" level=info msg="CreateContainer within sandbox \"887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3019ee77e1ee1256e6d1f555ccc46117dbcd29e491d54b4b66eddd276707f58d\"" Dec 13 02:10:26.764410 env[1343]: time="2024-12-13T02:10:26.764355637Z" level=info msg="StartContainer for \"3019ee77e1ee1256e6d1f555ccc46117dbcd29e491d54b4b66eddd276707f58d\"" Dec 13 02:10:26.769426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458500798.mount: Deactivated successfully. 
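The IPAM lines above show the node holding an affinity for block 192.168.89.128/26 and handing out single /32 addresses from it: 192.168.89.129 for calico-apiserver-77fb7456f4-8nsvc and 192.168.89.130 for coredns-76f75df574-7mq9l. The block arithmetic can be sanity-checked with plain standard-library address math (not Calico's IPAM code):

    # Check the logged assignments against the /26 block the node has an affinity for.
    import ipaddress

    block = ipaddress.ip_network("192.168.89.128/26")
    hosts = list(block.hosts())          # Python's hosts() skips the first and last address
    print(block.num_addresses)           # 64 addresses in the block
    print(hosts[0], hosts[1])            # 192.168.89.129 192.168.89.130 — the two IPs assigned above

Both addresses fall inside the affine block, which is why the assignments above proceed without claiming a new block.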
Dec 13 02:10:26.899584 env[1343]: time="2024-12-13T02:10:26.899526100Z" level=info msg="StartContainer for \"3019ee77e1ee1256e6d1f555ccc46117dbcd29e491d54b4b66eddd276707f58d\" returns successfully" Dec 13 02:10:27.159668 systemd-networkd[1083]: cali7c5d39de0e3: Gained IPv6LL Dec 13 02:10:27.478664 systemd-networkd[1083]: cali8652c255186: Gained IPv6LL Dec 13 02:10:27.670787 kubelet[2306]: I1213 02:10:27.668503 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7mq9l" podStartSLOduration=31.668446864 podStartE2EDuration="31.668446864s" podCreationTimestamp="2024-12-13 02:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:27.667978783 +0000 UTC m=+45.832273270" watchObservedRunningTime="2024-12-13 02:10:27.668446864 +0000 UTC m=+45.832741355" Dec 13 02:10:27.733000 audit[3830]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:27.733000 audit[3830]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffce38a0ad0 a2=0 a3=7ffce38a0abc items=0 ppid=2502 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:27.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:27.738000 audit[3830]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:27.738000 audit[3830]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce38a0ad0 a2=0 a3=0 items=0 ppid=2502 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:27.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:27.749000 audit[3832]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:27.749000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=4420 a0=3 a1=7ffc3c4a9bd0 a2=0 a3=7ffc3c4a9bbc items=0 ppid=2502 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:27.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:27.755000 audit[3832]: NETFILTER_CFG table=nat:98 family=2 entries=33 op=nft_register_chain pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:27.755000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffc3c4a9bd0 a2=0 a3=7ffc3c4a9bbc items=0 ppid=2502 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:27.755000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:28.137198 env[1343]: time="2024-12-13T02:10:28.137138704Z" level=info msg="StopPodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\"" Dec 13 02:10:28.150089 env[1343]: time="2024-12-13T02:10:28.150014540Z" level=info msg="StopPodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\"" Dec 13 02:10:28.156901 env[1343]: time="2024-12-13T02:10:28.156844387Z" level=info msg="StopPodSandbox for \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\"" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.313 [INFO][3891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.313 [INFO][3891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" iface="eth0" netns="/var/run/netns/cni-3a311d19-a621-84cd-ef6c-6733c089453a" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.316 [INFO][3891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" iface="eth0" netns="/var/run/netns/cni-3a311d19-a621-84cd-ef6c-6733c089453a" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.317 [INFO][3891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" iface="eth0" netns="/var/run/netns/cni-3a311d19-a621-84cd-ef6c-6733c089453a" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.317 [INFO][3891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.317 [INFO][3891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.410 [INFO][3911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.410 [INFO][3911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.410 [INFO][3911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.419 [WARNING][3911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.419 [INFO][3911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.421 [INFO][3911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:28.429222 env[1343]: 2024-12-13 02:10:28.427 [INFO][3891] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:28.439776 systemd[1]: run-netns-cni\x2d3a311d19\x2da621\x2d84cd\x2def6c\x2d6733c089453a.mount: Deactivated successfully. Dec 13 02:10:28.442744 env[1343]: time="2024-12-13T02:10:28.442664027Z" level=info msg="TearDown network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" successfully" Dec 13 02:10:28.442744 env[1343]: time="2024-12-13T02:10:28.442729028Z" level=info msg="StopPodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" returns successfully" Dec 13 02:10:28.444240 env[1343]: time="2024-12-13T02:10:28.443855531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mc2dz,Uid:58caca33-88e9-4a41-9735-56d04f40c4b1,Namespace:calico-system,Attempt:1,}" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.321 [INFO][3890] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.321 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" iface="eth0" netns="/var/run/netns/cni-46010544-cbd0-229d-aeaa-9c91b23b9cc9" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.321 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" iface="eth0" netns="/var/run/netns/cni-46010544-cbd0-229d-aeaa-9c91b23b9cc9" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.325 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" iface="eth0" netns="/var/run/netns/cni-46010544-cbd0-229d-aeaa-9c91b23b9cc9" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.325 [INFO][3890] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.325 [INFO][3890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.441 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.442 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.443 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.452 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.453 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.474 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:28.480067 env[1343]: 2024-12-13 02:10:28.476 [INFO][3890] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:28.484355 env[1343]: time="2024-12-13T02:10:28.484295100Z" level=info msg="TearDown network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" successfully" Dec 13 02:10:28.484594 env[1343]: time="2024-12-13T02:10:28.484563556Z" level=info msg="StopPodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" returns successfully" Dec 13 02:10:28.488824 systemd[1]: run-netns-cni\x2d46010544\x2dcbd0\x2d229d\x2daeaa\x2d9c91b23b9cc9.mount: Deactivated successfully. Dec 13 02:10:28.497708 env[1343]: time="2024-12-13T02:10:28.497648955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-pgkrc,Uid:77407504-933f-4624-af4c-dd5aec0d5323,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.407 [INFO][3904] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.407 [INFO][3904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" iface="eth0" netns="/var/run/netns/cni-e366efa1-8e11-beff-8b6f-38d35eadeb5c" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.407 [INFO][3904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" iface="eth0" netns="/var/run/netns/cni-e366efa1-8e11-beff-8b6f-38d35eadeb5c" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.408 [INFO][3904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" iface="eth0" netns="/var/run/netns/cni-e366efa1-8e11-beff-8b6f-38d35eadeb5c" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.408 [INFO][3904] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.408 [INFO][3904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.580 [INFO][3923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.595 [INFO][3923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.596 [INFO][3923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.625 [WARNING][3923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.625 [INFO][3923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.630 [INFO][3923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:28.635209 env[1343]: 2024-12-13 02:10:28.633 [INFO][3904] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:28.641714 env[1343]: time="2024-12-13T02:10:28.641637950Z" level=info msg="TearDown network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" successfully" Dec 13 02:10:28.641974 env[1343]: time="2024-12-13T02:10:28.641937570Z" level=info msg="StopPodSandbox for \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" returns successfully" Dec 13 02:10:28.645568 systemd[1]: run-netns-cni\x2de366efa1\x2d8e11\x2dbeff\x2d8b6f\x2d38d35eadeb5c.mount: Deactivated successfully. Dec 13 02:10:28.649793 env[1343]: time="2024-12-13T02:10:28.649718083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758847f549-2wzrz,Uid:80fc8685-542f-4623-8a52-98ad685ebdfb,Namespace:calico-system,Attempt:1,}" Dec 13 02:10:28.811427 kubelet[2306]: I1213 02:10:28.810440 2306 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:10:28.875000 audit[3987]: NETFILTER_CFG table=filter:99 family=2 entries=11 op=nft_register_rule pid=3987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:28.875000 audit[3987]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc54a134f0 a2=0 a3=7ffc54a134dc items=0 ppid=2502 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:28.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:28.889908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:10:28.886280 systemd-networkd[1083]: cali69c1f11a324: Link UP Dec 13 02:10:28.899418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali69c1f11a324: link becomes ready Dec 13 02:10:28.911000 audit[3987]: NETFILTER_CFG table=nat:100 family=2 entries=25 op=nft_register_chain pid=3987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:28.911000 audit[3987]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffc54a134f0 a2=0 a3=7ffc54a134dc items=0 ppid=2502 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:28.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:28.910862 systemd-networkd[1083]: cali69c1f11a324: Gained carrier Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.559 [INFO][3940] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.581 [INFO][3940] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0 calico-apiserver-77fb7456f4- calico-apiserver 77407504-933f-4624-af4c-dd5aec0d5323 799 0 2024-12-13 02:10:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77fb7456f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal calico-apiserver-77fb7456f4-pgkrc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali69c1f11a324 [] []}} ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.581 [INFO][3940] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.719 [INFO][3953] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" HandleID="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.775 [INFO][3953] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" HandleID="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", "pod":"calico-apiserver-77fb7456f4-pgkrc", "timestamp":"2024-12-13 02:10:28.719860539 +0000 UTC"}, Hostname:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.776 [INFO][3953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.776 [INFO][3953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
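The iptables-restore audit records further above (and the bpftool ones later in this log) carry the invoked command line as a hex-encoded, NUL-separated PROCTITLE field. A minimal decoding sketch, assuming only that the field is ASCII hex with NUL separators as those records suggest:

def decode_proctitle(hex_string: str) -> str:
    """Decode an audit PROCTITLE hex dump into a space-separated command line."""
    raw = bytes.fromhex(hex_string)
    # argv elements are separated by NUL bytes in the audit record.
    return " ".join(part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part)

# The value from the iptables-restore records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters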
Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.776 [INFO][3953] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.779 [INFO][3953] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.796 [INFO][3953] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.807 [INFO][3953] ipam/ipam.go 489: Trying affinity for 192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.818 [INFO][3953] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.836 [INFO][3953] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.836 [INFO][3953] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.845 [INFO][3953] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1 Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.862 [INFO][3953] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.875 [INFO][3953] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.131/26] block=192.168.89.128/26 handle="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.875 [INFO][3953] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.131/26] handle="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.875 [INFO][3953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
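The ipam.go records just above walk through the visible allocation steps for this pod: look up the host's block affinities, try the affine block 192.168.89.128/26, load it, and claim the next free address in it (192.168.89.131/26 here), all while holding the host-wide IPAM lock. As a simplified model of that "next free address in the affine block" step only, and not Calico's actual allocator, a sketch:

import ipaddress

def next_free_in_block(block_cidr: str, already_allocated: set[str]) -> str:
    """Toy model: return the first address in the affine block not yet handed out."""
    block = ipaddress.ip_network(block_cidr)
    for addr in block:                      # a /26 yields 64 candidate addresses
        if str(addr) not in already_allocated:
            return str(addr)
    raise RuntimeError(f"block {block_cidr} is exhausted")

# Assuming .128-.130 were already in use, the next claim would be .131,
# which is the address the records above report for this workload.
print(next_free_in_block("192.168.89.128/26",
                         {"192.168.89.128", "192.168.89.129", "192.168.89.130"}))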
Dec 13 02:10:28.961970 env[1343]: 2024-12-13 02:10:28.875 [INFO][3953] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.131/26] IPv6=[] ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" HandleID="k8s-pod-network.0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.963679 env[1343]: 2024-12-13 02:10:28.877 [INFO][3940] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77407504-933f-4624-af4c-dd5aec0d5323", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-77fb7456f4-pgkrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c1f11a324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:28.963679 env[1343]: 2024-12-13 02:10:28.878 [INFO][3940] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.131/32] ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.963679 env[1343]: 2024-12-13 02:10:28.878 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69c1f11a324 ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.963679 env[1343]: 2024-12-13 02:10:28.927 [INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" 
WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:28.963679 env[1343]: 2024-12-13 02:10:28.927 [INFO][3940] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77407504-933f-4624-af4c-dd5aec0d5323", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1", Pod:"calico-apiserver-77fb7456f4-pgkrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c1f11a324", MAC:"26:71:b1:fc:cb:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:28.963679 env[1343]: 2024-12-13 02:10:28.949 [INFO][3940] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1" Namespace="calico-apiserver" Pod="calico-apiserver-77fb7456f4-pgkrc" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:29.033205 systemd-networkd[1083]: cali16ce1578814: Link UP Dec 13 02:10:29.044422 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali16ce1578814: link becomes ready Dec 13 02:10:29.053187 systemd-networkd[1083]: cali16ce1578814: Gained carrier Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.653 [INFO][3929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.707 [INFO][3929] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0 csi-node-driver- calico-system 58caca33-88e9-4a41-9735-56d04f40c4b1 798 0 2024-12-13 02:10:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal csi-node-driver-mc2dz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali16ce1578814 [] []}} ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.707 [INFO][3929] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.894 [INFO][3975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" HandleID="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.955 [INFO][3975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" HandleID="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038a620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", "pod":"csi-node-driver-mc2dz", "timestamp":"2024-12-13 02:10:28.894154318 +0000 UTC"}, Hostname:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.955 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.955 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
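Throughout these records, the Workload and WorkloadEndpoint identifiers (for example ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0) are built from the node name, the orchestrator token "k8s", the pod name, and the interface, with literal dashes in each component doubled so that a single dash can act as the separator. A small sketch of that convention, inferred only from the strings in this log:

def escape(component: str) -> str:
    # Inferred convention: double every literal '-' so single '-' separates components.
    return component.replace("-", "--")

def workload_endpoint_name(node: str, pod: str, iface: str = "eth0") -> str:
    return f"{escape(node)}-k8s-{escape(pod)}-{iface}"

print(workload_endpoint_name(
    "ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal",
    "csi-node-driver-mc2dz"))
# -> ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0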
Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.955 [INFO][3975] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.958 [INFO][3975] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.969 [INFO][3975] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.976 [INFO][3975] ipam/ipam.go 489: Trying affinity for 192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.979 [INFO][3975] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.982 [INFO][3975] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.982 [INFO][3975] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.986 [INFO][3975] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213 Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:28.994 [INFO][3975] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:29.010 [INFO][3975] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.132/26] block=192.168.89.128/26 handle="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:29.011 [INFO][3975] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.132/26] handle="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:29.011 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
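By this point the log has shown two per-pod /32 addresses claimed from the same affine block, 192.168.89.131 for calico-apiserver-77fb7456f4-pgkrc and 192.168.89.132 for csi-node-driver-mc2dz, both inside 192.168.89.128/26. A quick, illustrative check of that block/address relationship with the standard library (the remaining-capacity figure is a toy count, not Calico's accounting):

import ipaddress

block = ipaddress.ip_network("192.168.89.128/26")
claimed = [ipaddress.ip_address(a) for a in ("192.168.89.131", "192.168.89.132")]

# Each per-pod /32 in these records falls inside the host's affine /26.
assert all(a in block for a in claimed)
print(f"{block} spans {block.num_addresses} addresses; "
      f"{block.num_addresses - len(claimed)} unclaimed in this toy count")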
Dec 13 02:10:29.085207 env[1343]: 2024-12-13 02:10:29.011 [INFO][3975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.132/26] IPv6=[] ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" HandleID="k8s-pod-network.0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.087027 env[1343]: 2024-12-13 02:10:29.015 [INFO][3929] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58caca33-88e9-4a41-9735-56d04f40c4b1", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-mc2dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16ce1578814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:29.087027 env[1343]: 2024-12-13 02:10:29.015 [INFO][3929] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.132/32] ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.087027 env[1343]: 2024-12-13 02:10:29.015 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16ce1578814 ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.087027 env[1343]: 2024-12-13 02:10:29.060 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.087027 
env[1343]: 2024-12-13 02:10:29.061 [INFO][3929] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58caca33-88e9-4a41-9735-56d04f40c4b1", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213", Pod:"csi-node-driver-mc2dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16ce1578814", MAC:"fe:f8:0a:c6:f3:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:29.087027 env[1343]: 2024-12-13 02:10:29.081 [INFO][3929] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213" Namespace="calico-system" Pod="csi-node-driver-mc2dz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:29.219429 env[1343]: time="2024-12-13T02:10:29.219278157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:29.220632 env[1343]: time="2024-12-13T02:10:29.220506433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:29.220632 env[1343]: time="2024-12-13T02:10:29.220553086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:29.221194 env[1343]: time="2024-12-13T02:10:29.221109870Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1 pid=4040 runtime=io.containerd.runc.v2 Dec 13 02:10:29.221833 env[1343]: time="2024-12-13T02:10:29.221676837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:29.222001 env[1343]: time="2024-12-13T02:10:29.221967545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:29.222226 env[1343]: time="2024-12-13T02:10:29.222170304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:29.222840 env[1343]: time="2024-12-13T02:10:29.222774974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213 pid=4053 runtime=io.containerd.runc.v2 Dec 13 02:10:29.278037 systemd-networkd[1083]: cali667de823310: Link UP Dec 13 02:10:29.289972 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali667de823310: link becomes ready Dec 13 02:10:29.291906 systemd-networkd[1083]: cali667de823310: Gained carrier Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:28.828 [INFO][3966] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:28.900 [INFO][3966] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0 calico-kube-controllers-758847f549- calico-system 80fc8685-542f-4623-8a52-98ad685ebdfb 800 0 2024-12-13 02:10:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:758847f549 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal calico-kube-controllers-758847f549-2wzrz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali667de823310 [] []}} ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:28.900 [INFO][3966] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.124 [INFO][3999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" HandleID="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.178 [INFO][3999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" HandleID="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" 
Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259060), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", "pod":"calico-kube-controllers-758847f549-2wzrz", "timestamp":"2024-12-13 02:10:29.123798931 +0000 UTC"}, Hostname:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.178 [INFO][3999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.178 [INFO][3999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.178 [INFO][3999] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.183 [INFO][3999] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.190 [INFO][3999] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.198 [INFO][3999] ipam/ipam.go 489: Trying affinity for 192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.200 [INFO][3999] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.204 [INFO][3999] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.204 [INFO][3999] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.210 [INFO][3999] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.216 [INFO][3999] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.225 [INFO][3999] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.133/26] block=192.168.89.128/26 handle="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.225 [INFO][3999] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.133/26] 
handle="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.226 [INFO][3999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:29.316674 env[1343]: 2024-12-13 02:10:29.226 [INFO][3999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.133/26] IPv6=[] ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" HandleID="k8s-pod-network.4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.318098 env[1343]: 2024-12-13 02:10:29.242 [INFO][3966] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0", GenerateName:"calico-kube-controllers-758847f549-", Namespace:"calico-system", SelfLink:"", UID:"80fc8685-542f-4623-8a52-98ad685ebdfb", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758847f549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-758847f549-2wzrz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667de823310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:29.318098 env[1343]: 2024-12-13 02:10:29.242 [INFO][3966] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.133/32] ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.318098 env[1343]: 2024-12-13 02:10:29.242 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali667de823310 ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" 
WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.318098 env[1343]: 2024-12-13 02:10:29.293 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.318098 env[1343]: 2024-12-13 02:10:29.294 [INFO][3966] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0", GenerateName:"calico-kube-controllers-758847f549-", Namespace:"calico-system", SelfLink:"", UID:"80fc8685-542f-4623-8a52-98ad685ebdfb", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758847f549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb", Pod:"calico-kube-controllers-758847f549-2wzrz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667de823310", MAC:"0a:67:52:c8:1e:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:29.318098 env[1343]: 2024-12-13 02:10:29.312 [INFO][3966] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb" Namespace="calico-system" Pod="calico-kube-controllers-758847f549-2wzrz" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:29.469190 env[1343]: time="2024-12-13T02:10:29.469098660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:29.469697 env[1343]: time="2024-12-13T02:10:29.469653481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:29.469885 env[1343]: time="2024-12-13T02:10:29.469852229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:29.470302 env[1343]: time="2024-12-13T02:10:29.470253226Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb pid=4124 runtime=io.containerd.runc.v2 Dec 13 02:10:29.480948 env[1343]: time="2024-12-13T02:10:29.480896133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mc2dz,Uid:58caca33-88e9-4a41-9735-56d04f40c4b1,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213\"" Dec 13 02:10:29.496616 env[1343]: time="2024-12-13T02:10:29.496563951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77fb7456f4-pgkrc,Uid:77407504-933f-4624-af4c-dd5aec0d5323,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1\"" Dec 13 02:10:29.587155 env[1343]: time="2024-12-13T02:10:29.587098481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758847f549-2wzrz,Uid:80fc8685-542f-4623-8a52-98ad685ebdfb,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb\"" Dec 13 02:10:29.662635 env[1343]: time="2024-12-13T02:10:29.662584253Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:29.667793 env[1343]: time="2024-12-13T02:10:29.667727412Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:29.672057 env[1343]: time="2024-12-13T02:10:29.672008177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:29.675049 env[1343]: time="2024-12-13T02:10:29.675000747Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:29.675685 env[1343]: time="2024-12-13T02:10:29.675625439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:10:29.678517 env[1343]: time="2024-12-13T02:10:29.678458436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:10:29.692630 env[1343]: time="2024-12-13T02:10:29.692567594Z" level=info msg="CreateContainer within sandbox \"e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:10:29.725931 env[1343]: time="2024-12-13T02:10:29.725802569Z" level=info msg="CreateContainer within sandbox \"e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"d0329a903de1da3955a8c4ea46a6dfdecc1345d7e44a7bcf00f5a3d48c8f7b4f\"" Dec 13 02:10:29.730071 env[1343]: time="2024-12-13T02:10:29.730009641Z" level=info msg="StartContainer for \"d0329a903de1da3955a8c4ea46a6dfdecc1345d7e44a7bcf00f5a3d48c8f7b4f\"" Dec 13 02:10:29.917767 env[1343]: time="2024-12-13T02:10:29.917701083Z" level=info msg="StartContainer for \"d0329a903de1da3955a8c4ea46a6dfdecc1345d7e44a7bcf00f5a3d48c8f7b4f\" returns successfully" Dec 13 02:10:30.063520 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 02:10:30.063692 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.096422 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.120423 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.143409 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.147509 env[1343]: time="2024-12-13T02:10:30.147452972Z" level=info msg="StopPodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\"" Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.175486 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.213538 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { perfmon } for 
pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.244720 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.269376 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.291405 kernel: audit: type=1400 audit(1734055830.045:300): avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.045000 audit: BPF prog-id=10 op=LOAD Dec 13 02:10:30.045000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcce36b530 a2=98 a3=3 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.045000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.072000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.300471 kernel: audit: type=1334 audit(1734055830.045:300): prog-id=10 op=LOAD Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit: BPF prog-id=11 op=LOAD Dec 13 02:10:30.073000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcce36b310 a2=74 a3=540051 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.073000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.073000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.073000 audit: BPF prog-id=12 op=LOAD Dec 13 02:10:30.073000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcce36b340 a2=94 a3=2 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.073000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.073000 audit: BPF prog-id=12 op=UNLOAD Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.412 [INFO][4258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:30.510915 env[1343]: 
2024-12-13 02:10:30.412 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" iface="eth0" netns="/var/run/netns/cni-f15aae55-4e13-7ae4-fc75-a68a8ca98e44" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.413 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" iface="eth0" netns="/var/run/netns/cni-f15aae55-4e13-7ae4-fc75-a68a8ca98e44" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.413 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" iface="eth0" netns="/var/run/netns/cni-f15aae55-4e13-7ae4-fc75-a68a8ca98e44" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.413 [INFO][4258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.413 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.493 [INFO][4271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.495 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.495 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.504 [WARNING][4271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.504 [INFO][4271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.506 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:30.510915 env[1343]: 2024-12-13 02:10:30.508 [INFO][4258] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:30.514742 env[1343]: time="2024-12-13T02:10:30.514678375Z" level=info msg="TearDown network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" successfully" Dec 13 02:10:30.514936 env[1343]: time="2024-12-13T02:10:30.514910253Z" level=info msg="StopPodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" returns successfully" Dec 13 02:10:30.516050 env[1343]: time="2024-12-13T02:10:30.516014145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c5mt8,Uid:1f0b368d-f96a-4022-88da-c258681fa6eb,Namespace:kube-system,Attempt:1,}" Dec 13 02:10:30.613894 systemd[1]: run-containerd-runc-k8s.io-d0329a903de1da3955a8c4ea46a6dfdecc1345d7e44a7bcf00f5a3d48c8f7b4f-runc.gM7mQK.mount: Deactivated successfully. Dec 13 02:10:30.614130 systemd[1]: run-netns-cni\x2df15aae55\x2d4e13\x2d7ae4\x2dfc75\x2da68a8ca98e44.mount: Deactivated successfully. Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.633000 audit: BPF prog-id=13 op=LOAD Dec 13 02:10:30.633000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcce36b200 a2=40 a3=1 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.633000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.640000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:10:30.640000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.640000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcce36b2d0 a2=50 a3=7ffcce36b3b0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.640000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.687000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.687000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcce36b210 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.687000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcce36b240 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcce36b150 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcce36b260 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcce36b240 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcce36b230 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcce36b260 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcce36b240 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcce36b260 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcce36b230 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcce36b2a0 a2=28 a3=0 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcce36b050 a2=50 a3=1 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit: BPF prog-id=14 op=LOAD Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcce36b050 a2=94 a3=5 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcce36b100 a2=50 a3=1 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcce36b220 a2=4 a3=38 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.688000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
02:10:30.688000 audit[4236]: AVC avc: denied { confidentiality } for pid=4236 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:10:30.688000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcce36b270 a2=94 a3=6 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.692000 audit[4236]: AVC avc: denied { confidentiality } for pid=4236 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:10:30.692000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcce36aa20 a2=94 a3=83 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.693000 audit[4236]: AVC avc: 
denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { perfmon } for pid=4236 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { bpf } for pid=4236 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.693000 audit[4236]: AVC avc: denied { confidentiality } for pid=4236 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:10:30.693000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcce36aa20 a2=94 a3=83 items=0 ppid=4187 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 02:10:30.720000 audit[4293]: NETFILTER_CFG table=filter:101 family=2 entries=10 op=nft_register_rule pid=4293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:30.720000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fffc6cd3f50 a2=0 a3=7fffc6cd3f3c items=0 ppid=2502 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { bpf } for pid=4299 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.723000 audit: BPF prog-id=15 op=LOAD Dec 13 02:10:30.723000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc818c030 a2=98 a3=1999999999999999 items=0 ppid=4187 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.723000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 02:10:30.724000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit: BPF prog-id=16 op=LOAD Dec 13 02:10:30.724000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc818bf10 a2=74 a3=ffff items=0 ppid=4187 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.724000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 02:10:30.724000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { perfmon } for pid=4299 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit[4299]: AVC avc: denied { bpf } for pid=4299 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:30.724000 audit: BPF prog-id=17 op=LOAD Dec 13 02:10:30.724000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc818bf50 a2=40 a3=7ffcc818c130 items=0 ppid=4187 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.724000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 02:10:30.724000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:10:30.726000 audit[4293]: NETFILTER_CFG table=nat:102 family=2 entries=20 op=nft_register_rule pid=4293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:30.726000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffc6cd3f50 a2=0 a3=7fffc6cd3f3c items=0 ppid=2502 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:30.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:30.744166 systemd-networkd[1083]: cali69c1f11a324: Gained IPv6LL Dec 13 02:10:30.932826 systemd-networkd[1083]: califd421477d71: Link UP Dec 13 02:10:30.962605 systemd-networkd[1083]: cali16ce1578814: Gained IPv6LL Dec 13 02:10:30.970430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:10:30.983435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califd421477d71: link becomes ready Dec 13 02:10:30.996946 systemd-networkd[1083]: califd421477d71: Gained carrier Dec 13 02:10:31.037866 kubelet[2306]: I1213 02:10:31.037422 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77fb7456f4-8nsvc" podStartSLOduration=25.033444443 podStartE2EDuration="29.037305403s" podCreationTimestamp="2024-12-13 02:10:02 +0000 UTC" firstStartedPulling="2024-12-13 02:10:25.67234707 +0000 UTC m=+43.836641540" lastFinishedPulling="2024-12-13 02:10:29.676208021 +0000 UTC m=+47.840502500" observedRunningTime="2024-12-13 02:10:30.699817284 +0000 UTC m=+48.864111789" watchObservedRunningTime="2024-12-13 02:10:31.037305403 +0000 UTC m=+49.201599894" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.655 [INFO][4277] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0 coredns-76f75df574- kube-system 1f0b368d-f96a-4022-88da-c258681fa6eb 828 0 2024-12-13 02:09:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal coredns-76f75df574-c5mt8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd421477d71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.656 [INFO][4277] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.836 [INFO][4292] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" HandleID="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.869 [INFO][4292] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" HandleID="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", "pod":"coredns-76f75df574-c5mt8", "timestamp":"2024-12-13 02:10:30.836260177 +0000 UTC"}, Hostname:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.870 [INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.870 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.870 [INFO][4292] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal' Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.872 [INFO][4292] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.877 [INFO][4292] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.883 [INFO][4292] ipam/ipam.go 489: Trying affinity for 192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.885 [INFO][4292] ipam/ipam.go 155: Attempting to load block cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.889 [INFO][4292] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.889 [INFO][4292] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.891 [INFO][4292] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.896 [INFO][4292] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.911 [INFO][4292] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.89.134/26] block=192.168.89.128/26 handle="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.911 [INFO][4292] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.134/26] handle="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" host="ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal" Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.911 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
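The IPAM trace above claims 192.168.89.134 out of the node's affinity block 192.168.89.128/26, the same block that earlier yielded 192.168.89.133 for the calico-kube-controllers endpoint. A quick illustrative check with Python's ipaddress module (not Calico's own code) confirms the block arithmetic:

```python
import ipaddress

# The affinity block and the address Calico IPAM reported claiming above.
block = ipaddress.ip_network("192.168.89.128/26")

print(block.num_addresses)                               # 64 addresses: .128 through .191
print(ipaddress.ip_address("192.168.89.134") in block)   # True: the coredns pod IP claimed here
print(ipaddress.ip_address("192.168.89.133") in block)   # True: the earlier calico-kube-controllers IP
```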
Dec 13 02:10:31.053506 env[1343]: 2024-12-13 02:10:30.911 [INFO][4292] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.134/26] IPv6=[] ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" HandleID="k8s-pod-network.16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.054932 env[1343]: 2024-12-13 02:10:30.920 [INFO][4277] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1f0b368d-f96a-4022-88da-c258681fa6eb", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-c5mt8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd421477d71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:31.054932 env[1343]: 2024-12-13 02:10:30.920 [INFO][4277] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.89.134/32] ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.054932 env[1343]: 2024-12-13 02:10:30.921 [INFO][4277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd421477d71 ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.054932 env[1343]: 2024-12-13 02:10:30.963 [INFO][4277] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.054932 env[1343]: 2024-12-13 02:10:31.002 [INFO][4277] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1f0b368d-f96a-4022-88da-c258681fa6eb", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a", Pod:"coredns-76f75df574-c5mt8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd421477d71", MAC:"ce:75:f6:6b:29:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:31.054932 env[1343]: 2024-12-13 02:10:31.043 [INFO][4277] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a" Namespace="kube-system" Pod="coredns-76f75df574-c5mt8" WorkloadEndpoint="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:31.118173 systemd-networkd[1083]: vxlan.calico: Link UP Dec 13 02:10:31.118185 systemd-networkd[1083]: vxlan.calico: Gained carrier Dec 13 02:10:31.169659 env[1343]: time="2024-12-13T02:10:31.169564120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:31.169941 env[1343]: time="2024-12-13T02:10:31.169903094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:31.170078 env[1343]: time="2024-12-13T02:10:31.170048061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:31.170453 env[1343]: time="2024-12-13T02:10:31.170401336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a pid=4337 runtime=io.containerd.runc.v2 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.239000 audit: BPF prog-id=18 op=LOAD Dec 13 02:10:31.239000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceffdd510 a2=98 a3=ffffffff items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit: BPF prog-id=19 op=LOAD Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceffdd320 a2=74 a3=540051 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit: BPF prog-id=20 op=LOAD Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceffdd350 a2=94 a3=2 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceffdd220 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceffdd250 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceffdd160 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceffdd270 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceffdd250 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceffdd240 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceffdd270 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceffdd250 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceffdd270 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceffdd240 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceffdd2b0 a2=28 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.243000 audit: BPF prog-id=21 op=LOAD Dec 13 02:10:31.243000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceffdd120 a2=40 a3=0 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.243000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:10:31.254513 systemd-networkd[1083]: cali667de823310: Gained IPv6LL Dec 13 02:10:31.262000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.262000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffceffdd110 a2=50 a3=2800 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.262000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffceffdd110 a2=50 a3=2800 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit: BPF prog-id=22 op=LOAD Dec 13 02:10:31.263000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceffdc930 a2=94 a3=2 items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.263000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.263000 audit: BPF prog-id=23 op=LOAD Dec 13 02:10:31.263000 audit[4359]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceffdca30 a2=94 a3=2d items=0 ppid=4187 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied 
{ perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit: BPF prog-id=24 op=LOAD Dec 13 02:10:31.277000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc00cc970 a2=98 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.277000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.277000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit: BPF prog-id=25 op=LOAD Dec 13 02:10:31.277000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7ffdc00cc750 a2=74 a3=540051 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.277000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.277000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.277000 audit: BPF prog-id=26 op=LOAD Dec 13 02:10:31.277000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc00cc780 a2=94 a3=2 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.277000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.277000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:10:31.332170 systemd[1]: run-containerd-runc-k8s.io-16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a-runc.ZZT24d.mount: Deactivated successfully. 
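For reference when reading the audit burst above: each SYSCALL record carries arch=c000003e (x86_64) and syscall=321, which is bpf(2) on that architecture, and a0=5 in the prog-load records is the BPF_PROG_LOAD command; the paired "BPF prog-id=N op=LOAD" / "op=UNLOAD" events appear to be the Calico dataplane test-loading its XDP prefilter objects via bpftool and discarding them again. A small, hypothetical field parser for one of these records (abridged from an entry above) might look like:

    # Hypothetical helper, names chosen here for illustration: split an audit
    # SYSCALL record into key/value fields so the interesting ones stand out.
    def parse_audit_record(record: str) -> dict:
        fields = {}
        for token in record.split():
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value.strip('"')
        return fields

    # Abridged from the first prog-load SYSCALL record in the burst above.
    rec = parse_audit_record(
        'arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceffdd510 a2=98 '
        'ppid=4187 pid=4359 comm="bpftool" exe="/usr/bin/bpftool"'
    )
    assert rec["syscall"] == "321"   # __NR_bpf on x86_64 (arch=c000003e)
    assert rec["a0"] == "5"          # BPF_PROG_LOAD
    assert rec["success"] == "yes"   # exit=3 is the new program's file descriptor
    print(rec["comm"], rec["exe"])   # bpftool /usr/bin/bpftool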
Dec 13 02:10:31.420867 env[1343]: time="2024-12-13T02:10:31.420796877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c5mt8,Uid:1f0b368d-f96a-4022-88da-c258681fa6eb,Namespace:kube-system,Attempt:1,} returns sandbox id \"16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a\"" Dec 13 02:10:31.424608 env[1343]: time="2024-12-13T02:10:31.424556266Z" level=info msg="CreateContainer within sandbox \"16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:10:31.473444 env[1343]: time="2024-12-13T02:10:31.473365134Z" level=info msg="CreateContainer within sandbox \"16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f8fcc66e10a8a986e5262d7f782a04a7d5ef93e600d22d6ee9f5456c9dc44a7\"" Dec 13 02:10:31.474658 env[1343]: time="2024-12-13T02:10:31.474613373Z" level=info msg="StartContainer for \"0f8fcc66e10a8a986e5262d7f782a04a7d5ef93e600d22d6ee9f5456c9dc44a7\"" Dec 13 02:10:31.697415 kubelet[2306]: I1213 02:10:31.695017 2306 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:10:31.747636 env[1343]: time="2024-12-13T02:10:31.747574468Z" level=info msg="StartContainer for \"0f8fcc66e10a8a986e5262d7f782a04a7d5ef93e600d22d6ee9f5456c9dc44a7\" returns successfully" Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit: BPF prog-id=27 op=LOAD Dec 13 02:10:31.831000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc00cc640 a2=40 a3=1 items=0 ppid=4187 pid=4368 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.831000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.831000 audit: BPF prog-id=27 op=UNLOAD Dec 13 02:10:31.831000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.831000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdc00cc710 a2=50 a3=7ffdc00cc7f0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.831000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc00cc650 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc00cc680 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc00cc590 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 
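The long PROCTITLE values in these records are the audited command lines, hex-encoded with NUL bytes between arguments. Decoding them (shown here in Python purely for illustration) recovers the two bpftool invocations being audited: "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp" for the prog-load records and "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A" for the prog-show records:

    # Decode an audit PROCTITLE value: the process argv, NUL-separated, hex-encoded.
    def decode_proctitle(hex_value: str) -> str:
        return bytes.fromhex(hex_value).decode("utf-8", errors="replace").replace("\x00", " ")

    # PROCTITLE value from the 'prog show pinned' records above:
    print(decode_proctitle(
        "627066746F6F6C002D2D6A736F6E002D2D70726574747900"
        "70726F670073686F770070696E6E656400"
        "2F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"
    ))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A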
Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc00cc6a0 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc00cc680 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc00cc670 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc00cc6a0 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc00cc680 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc00cc6a0 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc00cc670 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc00cc6e0 a2=28 a3=0 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdc00cc490 a2=50 a3=1 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit: BPF prog-id=28 op=LOAD Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc00cc490 a2=94 a3=5 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit: BPF prog-id=28 op=UNLOAD Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdc00cc540 a2=50 a3=1 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.874000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.874000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffdc00cc660 
a2=4 a3=38 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { confidentiality } for pid=4368 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:10:31.875000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdc00cc6b0 a2=94 a3=6 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.875000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { confidentiality } for pid=4368 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:10:31.875000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdc00cbe60 a2=94 a3=83 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.875000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.875000 audit[4368]: AVC avc: denied { confidentiality } for pid=4368 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 02:10:31.875000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdc00cbe60 a2=94 a3=83 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.875000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.876000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.876000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc00cd8a0 a2=10 a3=f1f00800 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.881632 env[1343]: time="2024-12-13T02:10:31.878566562Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:31.876000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.881000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.881000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 
a0=f a1=7ffdc00cd740 a2=10 a3=3 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.881000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.881000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc00cd6e0 a2=10 a3=3 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.881000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 02:10:31.881000 audit[4368]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc00cd6e0 a2=10 a3=7 items=0 ppid=4187 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 02:10:31.888949 env[1343]: time="2024-12-13T02:10:31.884617771Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:31.889799 env[1343]: time="2024-12-13T02:10:31.889754018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:31.890000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:10:31.890000 audit[2165]: SYSCALL arch=c000003e syscall=202 success=yes exit=1 a0=c00009d148 a1=81 a2=1 a3=0 items=0 ppid=2030 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:31.890000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3132382E302E3438002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265 Dec 13 02:10:31.896716 env[1343]: time="2024-12-13T02:10:31.896675405Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 
13 02:10:31.897235 env[1343]: time="2024-12-13T02:10:31.897201788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 02:10:31.902043 env[1343]: time="2024-12-13T02:10:31.901973228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:10:31.903347 env[1343]: time="2024-12-13T02:10:31.902810430Z" level=info msg="CreateContainer within sandbox \"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:10:31.947679 env[1343]: time="2024-12-13T02:10:31.947555255Z" level=info msg="CreateContainer within sandbox \"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e403e290870be0330bb5db76c9ac658a29735d1f1347a6cdb76f3b00be829531\"" Dec 13 02:10:31.948825 env[1343]: time="2024-12-13T02:10:31.948788218Z" level=info msg="StartContainer for \"e403e290870be0330bb5db76c9ac658a29735d1f1347a6cdb76f3b00be829531\"" Dec 13 02:10:32.034929 systemd[1]: run-containerd-runc-k8s.io-e403e290870be0330bb5db76c9ac658a29735d1f1347a6cdb76f3b00be829531-runc.BT8MvP.mount: Deactivated successfully. Dec 13 02:10:32.155200 env[1343]: time="2024-12-13T02:10:32.154999510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:32.165682 env[1343]: time="2024-12-13T02:10:32.165627748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:32.180034 env[1343]: time="2024-12-13T02:10:32.176747326Z" level=info msg="StartContainer for \"e403e290870be0330bb5db76c9ac658a29735d1f1347a6cdb76f3b00be829531\" returns successfully" Dec 13 02:10:32.184182 env[1343]: time="2024-12-13T02:10:32.184135882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:32.191526 env[1343]: time="2024-12-13T02:10:32.191477187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:32.191886 env[1343]: time="2024-12-13T02:10:32.191840995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:10:32.199680 env[1343]: time="2024-12-13T02:10:32.199562346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 02:10:32.201286 env[1343]: time="2024-12-13T02:10:32.201224078Z" level=info msg="CreateContainer within sandbox \"0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:10:32.197000 audit[4480]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=4480 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:10:32.197000 audit[4480]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffca49645d0 a2=0 a3=7ffca49645bc 
items=0 ppid=4187 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.197000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:10:32.227906 env[1343]: time="2024-12-13T02:10:32.227819001Z" level=info msg="CreateContainer within sandbox \"0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0157cc331b7327f2d168bd572ecf754b65265a35cf9ccc53e414aad088fd187a\"" Dec 13 02:10:32.228865 env[1343]: time="2024-12-13T02:10:32.228828495Z" level=info msg="StartContainer for \"0157cc331b7327f2d168bd572ecf754b65265a35cf9ccc53e414aad088fd187a\"" Dec 13 02:10:32.241000 audit[4479]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=4479 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:10:32.241000 audit[4479]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc82bdf7a0 a2=0 a3=7ffc82bdf78c items=0 ppid=4187 pid=4479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.241000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:10:32.269000 audit[4478]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:10:32.269000 audit[4478]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd72eb1bb0 a2=0 a3=7ffd72eb1b9c items=0 ppid=4187 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.269000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:10:32.300000 audit[4502]: NETFILTER_CFG table=filter:106 family=2 entries=215 op=nft_register_chain pid=4502 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 02:10:32.300000 audit[4502]: SYSCALL arch=c000003e syscall=46 success=yes exit=125772 a0=3 a1=7ffe858c7ec0 a2=0 a3=7ffe858c7eac items=0 ppid=4187 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.300000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 02:10:32.436466 env[1343]: time="2024-12-13T02:10:32.436364560Z" level=info msg="StartContainer for \"0157cc331b7327f2d168bd572ecf754b65265a35cf9ccc53e414aad088fd187a\" returns successfully" Dec 13 02:10:32.599200 systemd-networkd[1083]: vxlan.calico: Gained IPv6LL Dec 13 02:10:32.719097 kubelet[2306]: I1213 02:10:32.719050 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-77fb7456f4-pgkrc" podStartSLOduration=28.024218228 podStartE2EDuration="30.718979888s" podCreationTimestamp="2024-12-13 02:10:02 +0000 UTC" firstStartedPulling="2024-12-13 02:10:29.498297442 +0000 UTC m=+47.662591923" lastFinishedPulling="2024-12-13 02:10:32.19305912 +0000 UTC m=+50.357353583" observedRunningTime="2024-12-13 02:10:32.717828493 +0000 UTC m=+50.882122983" watchObservedRunningTime="2024-12-13 02:10:32.718979888 +0000 UTC m=+50.883274377" Dec 13 02:10:32.762000 audit[4530]: NETFILTER_CFG table=filter:107 family=2 entries=10 op=nft_register_rule pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:32.762000 audit[4530]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffea6fb1a40 a2=0 a3=7ffea6fb1a2c items=0 ppid=2502 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:32.768000 audit[4530]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:32.768000 audit[4530]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea6fb1a40 a2=0 a3=7ffea6fb1a2c items=0 ppid=2502 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:32.794000 audit[4532]: NETFILTER_CFG table=filter:109 family=2 entries=10 op=nft_register_rule pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:32.794000 audit[4532]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff0b9267b0 a2=0 a3=7fff0b92679c items=0 ppid=2502 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:32.800000 audit[4532]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:32.800000 audit[4532]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff0b9267b0 a2=0 a3=7fff0b92679c items=0 ppid=2502 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:32.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:32.984106 systemd-networkd[1083]: califd421477d71: Gained IPv6LL Dec 13 02:10:33.269522 kubelet[2306]: I1213 02:10:33.268819 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c5mt8" podStartSLOduration=37.268759326 
podStartE2EDuration="37.268759326s" podCreationTimestamp="2024-12-13 02:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:32.737536373 +0000 UTC m=+50.901830863" watchObservedRunningTime="2024-12-13 02:10:33.268759326 +0000 UTC m=+51.433053820" Dec 13 02:10:33.837000 audit[4535]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:33.837000 audit[4535]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff316c7be0 a2=0 a3=7fff316c7bcc items=0 ppid=2502 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:33.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:33.861000 audit[4535]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:33.861000 audit[4535]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff316c7be0 a2=0 a3=7fff316c7bcc items=0 ppid=2502 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:33.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:34.878000 audit[4539]: NETFILTER_CFG table=filter:113 family=2 entries=9 op=nft_register_rule pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:34.878000 audit[4539]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd919fc590 a2=0 a3=7ffd919fc57c items=0 ppid=2502 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:34.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:34.883000 audit[4539]: NETFILTER_CFG table=nat:114 family=2 entries=27 op=nft_register_chain pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:34.883000 audit[4539]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd919fc590 a2=0 a3=7ffd919fc57c items=0 ppid=2502 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:34.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:34.952016 env[1343]: time="2024-12-13T02:10:34.951937413Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:34.955162 env[1343]: time="2024-12-13T02:10:34.955074581Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:34.958234 env[1343]: time="2024-12-13T02:10:34.957937609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:34.960081 env[1343]: time="2024-12-13T02:10:34.960010110Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:34.960813 env[1343]: time="2024-12-13T02:10:34.960768173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 02:10:34.965875 env[1343]: time="2024-12-13T02:10:34.965836534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:10:34.993756 env[1343]: time="2024-12-13T02:10:34.993697807Z" level=info msg="CreateContainer within sandbox \"4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:10:35.015969 env[1343]: time="2024-12-13T02:10:35.011367823Z" level=info msg="CreateContainer within sandbox \"4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9614734e64ea42471fda732a426d00c5aa0e51883e9a1b41b233d257a145bbd7\"" Dec 13 02:10:35.015969 env[1343]: time="2024-12-13T02:10:35.012526925Z" level=info msg="StartContainer for \"9614734e64ea42471fda732a426d00c5aa0e51883e9a1b41b233d257a145bbd7\"" Dec 13 02:10:35.148802 env[1343]: time="2024-12-13T02:10:35.148660362Z" level=info msg="StartContainer for \"9614734e64ea42471fda732a426d00c5aa0e51883e9a1b41b233d257a145bbd7\" returns successfully" Dec 13 02:10:35.187020 kubelet[2306]: I1213 02:10:35.186979 2306 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:10:35.869275 kubelet[2306]: I1213 02:10:35.868275 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-758847f549-2wzrz" podStartSLOduration=27.511125115 podStartE2EDuration="32.868211535s" podCreationTimestamp="2024-12-13 02:10:03 +0000 UTC" firstStartedPulling="2024-12-13 02:10:29.604078261 +0000 UTC m=+47.768372731" lastFinishedPulling="2024-12-13 02:10:34.961164625 +0000 UTC m=+53.125459151" observedRunningTime="2024-12-13 02:10:35.751018442 +0000 UTC m=+53.915312931" watchObservedRunningTime="2024-12-13 02:10:35.868211535 +0000 UTC m=+54.032506027" Dec 13 02:10:35.925423 kernel: kauditd_printk_skb: 502 callbacks suppressed Dec 13 02:10:35.925602 kernel: audit: type=1325 audit(1734055835.901:405): table=filter:115 family=2 entries=8 op=nft_register_rule pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:35.901000 audit[4598]: NETFILTER_CFG table=filter:115 family=2 entries=8 op=nft_register_rule pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:35.901000 audit[4598]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff9b85eea0 a2=0 a3=7fff9b85ee8c items=0 ppid=2502 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:35.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:35.975745 kernel: audit: type=1300 audit(1734055835.901:405): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff9b85eea0 a2=0 a3=7fff9b85ee8c items=0 ppid=2502 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:35.975931 kernel: audit: type=1327 audit(1734055835.901:405): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:35.960000 audit[4598]: NETFILTER_CFG table=nat:116 family=2 entries=34 op=nft_register_chain pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:35.960000 audit[4598]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff9b85eea0 a2=0 a3=7fff9b85ee8c items=0 ppid=2502 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:36.038108 kernel: audit: type=1325 audit(1734055835.960:406): table=nat:116 family=2 entries=34 op=nft_register_chain pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:10:36.038273 kernel: audit: type=1300 audit(1734055835.960:406): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff9b85eea0 a2=0 a3=7fff9b85ee8c items=0 ppid=2502 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:35.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:36.054418 kernel: audit: type=1327 audit(1734055835.960:406): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:10:36.228637 env[1343]: time="2024-12-13T02:10:36.228477399Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:36.231926 env[1343]: time="2024-12-13T02:10:36.231860252Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:36.234774 env[1343]: time="2024-12-13T02:10:36.234721741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:36.237551 env[1343]: time="2024-12-13T02:10:36.237485948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:10:36.238583 env[1343]: 
time="2024-12-13T02:10:36.238527082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 02:10:36.242760 env[1343]: time="2024-12-13T02:10:36.242680629Z" level=info msg="CreateContainer within sandbox \"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:10:36.268122 env[1343]: time="2024-12-13T02:10:36.268018332Z" level=info msg="CreateContainer within sandbox \"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a386fdca554311ef1147870615ddd6198bccb3be82002a7dcaca0dfea5623272\"" Dec 13 02:10:36.271123 env[1343]: time="2024-12-13T02:10:36.269030098Z" level=info msg="StartContainer for \"a386fdca554311ef1147870615ddd6198bccb3be82002a7dcaca0dfea5623272\"" Dec 13 02:10:36.323371 systemd[1]: run-containerd-runc-k8s.io-a386fdca554311ef1147870615ddd6198bccb3be82002a7dcaca0dfea5623272-runc.6ojWCJ.mount: Deactivated successfully. Dec 13 02:10:36.374776 env[1343]: time="2024-12-13T02:10:36.374670640Z" level=info msg="StartContainer for \"a386fdca554311ef1147870615ddd6198bccb3be82002a7dcaca0dfea5623272\" returns successfully" Dec 13 02:10:36.397403 kubelet[2306]: I1213 02:10:36.397025 2306 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:10:36.397403 kubelet[2306]: I1213 02:10:36.397071 2306 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:10:40.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.48:22-139.178.68.195:41326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:40.739014 systemd[1]: Started sshd@9-10.128.0.48:22-139.178.68.195:41326.service. Dec 13 02:10:40.764553 kernel: audit: type=1130 audit(1734055840.737:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.48:22-139.178.68.195:41326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:10:41.087028 kernel: audit: type=1101 audit(1734055841.054:408): pid=4646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.054000 audit[4646]: USER_ACCT pid=4646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.087501 sshd[4646]: Accepted publickey for core from 139.178.68.195 port 41326 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:10:41.087363 sshd[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:41.085000 audit[4646]: CRED_ACQ pid=4646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.097247 systemd-logind[1329]: New session 8 of user core. Dec 13 02:10:41.099263 systemd[1]: Started session-8.scope. Dec 13 02:10:41.117091 kernel: audit: type=1103 audit(1734055841.085:409): pid=4646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.117228 kernel: audit: type=1006 audit(1734055841.085:410): pid=4646 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 13 02:10:41.085000 audit[4646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd47d20960 a2=3 a3=0 items=0 ppid=1 pid=4646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:41.085000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:41.170338 kernel: audit: type=1300 audit(1734055841.085:410): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd47d20960 a2=3 a3=0 items=0 ppid=1 pid=4646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:41.170543 kernel: audit: type=1327 audit(1734055841.085:410): proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:41.170609 kernel: audit: type=1105 audit(1734055841.114:411): pid=4646 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.114000 audit[4646]: USER_START pid=4646 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.120000 audit[4649]: CRED_ACQ pid=4649 uid=0 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.228289 kernel: audit: type=1103 audit(1734055841.120:412): pid=4649 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.464011 sshd[4646]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:41.500024 kernel: audit: type=1106 audit(1734055841.464:413): pid=4646 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.464000 audit[4646]: USER_END pid=4646 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.465000 audit[4646]: CRED_DISP pid=4646 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.502930 systemd[1]: sshd@9-10.128.0.48:22-139.178.68.195:41326.service: Deactivated successfully. Dec 13 02:10:41.504346 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:10:41.509043 systemd-logind[1329]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:10:41.510979 systemd-logind[1329]: Removed session 8. Dec 13 02:10:41.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.48:22-139.178.68.195:41326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:41.549280 kernel: audit: type=1104 audit(1734055841.465:414): pid=4646 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:41.549565 kernel: audit: type=1131 audit(1734055841.499:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.48:22-139.178.68.195:41326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:42.201786 env[1343]: time="2024-12-13T02:10:42.198860379Z" level=info msg="StopPodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\"" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.256 [WARNING][4674] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1f0b368d-f96a-4022-88da-c258681fa6eb", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a", Pod:"coredns-76f75df574-c5mt8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd421477d71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.256 [INFO][4674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.256 [INFO][4674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" iface="eth0" netns="" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.256 [INFO][4674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.256 [INFO][4674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.295 [INFO][4680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.295 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.296 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.321 [WARNING][4680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.322 [INFO][4680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.323 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.327555 env[1343]: 2024-12-13 02:10:42.326 [INFO][4674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.328489 env[1343]: time="2024-12-13T02:10:42.327581442Z" level=info msg="TearDown network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" successfully" Dec 13 02:10:42.328489 env[1343]: time="2024-12-13T02:10:42.327625890Z" level=info msg="StopPodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" returns successfully" Dec 13 02:10:42.328934 env[1343]: time="2024-12-13T02:10:42.328895311Z" level=info msg="RemovePodSandbox for \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\"" Dec 13 02:10:42.329126 env[1343]: time="2024-12-13T02:10:42.329068470Z" level=info msg="Forcibly stopping sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\"" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.396 [WARNING][4706] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1f0b368d-f96a-4022-88da-c258681fa6eb", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"16319211956d290b331abab9bf0e43064c64957f582de1a949b37e0f9fba1d9a", Pod:"coredns-76f75df574-c5mt8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd421477d71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.396 [INFO][4706] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.396 [INFO][4706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" iface="eth0" netns="" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.396 [INFO][4706] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.396 [INFO][4706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.425 [INFO][4712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.425 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.425 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.433 [WARNING][4712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.433 [INFO][4712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" HandleID="k8s-pod-network.1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--c5mt8-eth0" Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.435 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.438586 env[1343]: 2024-12-13 02:10:42.437 [INFO][4706] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf" Dec 13 02:10:42.439606 env[1343]: time="2024-12-13T02:10:42.438631792Z" level=info msg="TearDown network for sandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" successfully" Dec 13 02:10:42.445497 env[1343]: time="2024-12-13T02:10:42.445429983Z" level=info msg="RemovePodSandbox \"1033571f965b789c81b4fc7a52ba974e8dcf005baa14443ddadabecd5afa8adf\" returns successfully" Dec 13 02:10:42.446275 env[1343]: time="2024-12-13T02:10:42.446221143Z" level=info msg="StopPodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\"" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.493 [WARNING][4731] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77407504-933f-4624-af4c-dd5aec0d5323", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1", Pod:"calico-apiserver-77fb7456f4-pgkrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c1f11a324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.493 [INFO][4731] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.493 [INFO][4731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" iface="eth0" netns="" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.493 [INFO][4731] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.493 [INFO][4731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.521 [INFO][4737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.521 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.521 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.549 [WARNING][4737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.550 [INFO][4737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.552 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.559808 env[1343]: 2024-12-13 02:10:42.557 [INFO][4731] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.560898 env[1343]: time="2024-12-13T02:10:42.559851185Z" level=info msg="TearDown network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" successfully" Dec 13 02:10:42.560898 env[1343]: time="2024-12-13T02:10:42.559893306Z" level=info msg="StopPodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" returns successfully" Dec 13 02:10:42.560898 env[1343]: time="2024-12-13T02:10:42.560631557Z" level=info msg="RemovePodSandbox for \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\"" Dec 13 02:10:42.560898 env[1343]: time="2024-12-13T02:10:42.560677896Z" level=info msg="Forcibly stopping sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\"" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.637 [WARNING][4758] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"77407504-933f-4624-af4c-dd5aec0d5323", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"0d1a1d74e3902ed4f15da2d68cb81935c325e98dfd8a09c9ef75c13be60ee8c1", Pod:"calico-apiserver-77fb7456f4-pgkrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c1f11a324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.637 [INFO][4758] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.637 [INFO][4758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" iface="eth0" netns="" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.637 [INFO][4758] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.637 [INFO][4758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.663 [INFO][4764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.663 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.663 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.672 [WARNING][4764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.672 [INFO][4764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" HandleID="k8s-pod-network.442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--pgkrc-eth0" Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.674 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.677427 env[1343]: 2024-12-13 02:10:42.675 [INFO][4758] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8" Dec 13 02:10:42.678278 env[1343]: time="2024-12-13T02:10:42.677455187Z" level=info msg="TearDown network for sandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" successfully" Dec 13 02:10:42.683603 env[1343]: time="2024-12-13T02:10:42.683537534Z" level=info msg="RemovePodSandbox \"442126cf743753f2040a835fb27f4f37fc440cd64849d91667835c29ca032dc8\" returns successfully" Dec 13 02:10:42.684245 env[1343]: time="2024-12-13T02:10:42.684194159Z" level=info msg="StopPodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\"" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.730 [WARNING][4783] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e610094-b7bd-43b3-a038-f2b1fd75f780", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e", Pod:"calico-apiserver-77fb7456f4-8nsvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c5d39de0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.731 [INFO][4783] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.731 [INFO][4783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" iface="eth0" netns="" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.731 [INFO][4783] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.731 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.760 [INFO][4789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.761 [INFO][4789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.761 [INFO][4789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.768 [WARNING][4789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.768 [INFO][4789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.770 [INFO][4789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.773702 env[1343]: 2024-12-13 02:10:42.772 [INFO][4783] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.774503 env[1343]: time="2024-12-13T02:10:42.774440118Z" level=info msg="TearDown network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" successfully" Dec 13 02:10:42.774503 env[1343]: time="2024-12-13T02:10:42.774490055Z" level=info msg="StopPodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" returns successfully" Dec 13 02:10:42.775204 env[1343]: time="2024-12-13T02:10:42.775155723Z" level=info msg="RemovePodSandbox for \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\"" Dec 13 02:10:42.775320 env[1343]: time="2024-12-13T02:10:42.775201902Z" level=info msg="Forcibly stopping sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\"" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.826 [WARNING][4807] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0", GenerateName:"calico-apiserver-77fb7456f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e610094-b7bd-43b3-a038-f2b1fd75f780", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77fb7456f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"e21dfe1a962dd3362a0fb4b1e4ca2c9ebe68c4e946092b7af23671c07c59783e", Pod:"calico-apiserver-77fb7456f4-8nsvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c5d39de0e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.827 [INFO][4807] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.827 [INFO][4807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" iface="eth0" netns="" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.827 [INFO][4807] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.827 [INFO][4807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.858 [INFO][4813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.858 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.858 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.865 [WARNING][4813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.865 [INFO][4813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" HandleID="k8s-pod-network.5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--apiserver--77fb7456f4--8nsvc-eth0" Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.867 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.869729 env[1343]: 2024-12-13 02:10:42.868 [INFO][4807] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f" Dec 13 02:10:42.869729 env[1343]: time="2024-12-13T02:10:42.869663359Z" level=info msg="TearDown network for sandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" successfully" Dec 13 02:10:42.876898 env[1343]: time="2024-12-13T02:10:42.876801091Z" level=info msg="RemovePodSandbox \"5409755604a5648c671fd05c95418f6539d363602de19aa4150bdea3a3d2355f\" returns successfully" Dec 13 02:10:42.877551 env[1343]: time="2024-12-13T02:10:42.877510958Z" level=info msg="StopPodSandbox for \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\"" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.929 [WARNING][4831] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0", GenerateName:"calico-kube-controllers-758847f549-", Namespace:"calico-system", SelfLink:"", UID:"80fc8685-542f-4623-8a52-98ad685ebdfb", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758847f549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb", Pod:"calico-kube-controllers-758847f549-2wzrz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667de823310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.929 [INFO][4831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.929 [INFO][4831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" iface="eth0" netns="" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.929 [INFO][4831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.929 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.962 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.962 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.962 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.969 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.969 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.970 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:42.973444 env[1343]: 2024-12-13 02:10:42.972 [INFO][4831] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:42.974108 env[1343]: time="2024-12-13T02:10:42.973547560Z" level=info msg="TearDown network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" successfully" Dec 13 02:10:42.974108 env[1343]: time="2024-12-13T02:10:42.973611393Z" level=info msg="StopPodSandbox for \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" returns successfully" Dec 13 02:10:42.975058 env[1343]: time="2024-12-13T02:10:42.975005286Z" level=info msg="RemovePodSandbox for \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\"" Dec 13 02:10:42.975208 env[1343]: time="2024-12-13T02:10:42.975067630Z" level=info msg="Forcibly stopping sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\"" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.021 [WARNING][4856] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0", GenerateName:"calico-kube-controllers-758847f549-", Namespace:"calico-system", SelfLink:"", UID:"80fc8685-542f-4623-8a52-98ad685ebdfb", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758847f549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"4d15b54e4cb1d3f0a1fcd495b7886f4418bdc0fe8e296ca631b2063c0b1bf6fb", Pod:"calico-kube-controllers-758847f549-2wzrz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667de823310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.022 [INFO][4856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.022 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" iface="eth0" netns="" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.022 [INFO][4856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.022 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.051 [INFO][4862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.052 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.052 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.059 [WARNING][4862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.059 [INFO][4862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" HandleID="k8s-pod-network.3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-calico--kube--controllers--758847f549--2wzrz-eth0" Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.061 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:43.063504 env[1343]: 2024-12-13 02:10:43.062 [INFO][4856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049" Dec 13 02:10:43.064344 env[1343]: time="2024-12-13T02:10:43.063532963Z" level=info msg="TearDown network for sandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" successfully" Dec 13 02:10:43.068867 env[1343]: time="2024-12-13T02:10:43.068811500Z" level=info msg="RemovePodSandbox \"3099da821f5bf00df06b0eeea367ac9d2e1376c207240a9a2a1a6a36baa56049\" returns successfully" Dec 13 02:10:43.069533 env[1343]: time="2024-12-13T02:10:43.069478692Z" level=info msg="StopPodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\"" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.125 [WARNING][4880] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58caca33-88e9-4a41-9735-56d04f40c4b1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213", Pod:"csi-node-driver-mc2dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16ce1578814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.126 [INFO][4880] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.126 [INFO][4880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" iface="eth0" netns="" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.126 [INFO][4880] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.126 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.157 [INFO][4886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.158 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.158 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.171 [WARNING][4886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.171 [INFO][4886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.173 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:43.179354 env[1343]: 2024-12-13 02:10:43.175 [INFO][4880] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.179354 env[1343]: time="2024-12-13T02:10:43.176900886Z" level=info msg="TearDown network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" successfully" Dec 13 02:10:43.179354 env[1343]: time="2024-12-13T02:10:43.176941813Z" level=info msg="StopPodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" returns successfully" Dec 13 02:10:43.179354 env[1343]: time="2024-12-13T02:10:43.177555494Z" level=info msg="RemovePodSandbox for \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\"" Dec 13 02:10:43.179354 env[1343]: time="2024-12-13T02:10:43.177598382Z" level=info msg="Forcibly stopping sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\"" Dec 13 02:10:43.297994 systemd[1]: run-containerd-runc-k8s.io-9614734e64ea42471fda732a426d00c5aa0e51883e9a1b41b233d257a145bbd7-runc.An7lSX.mount: Deactivated successfully. Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.233 [WARNING][4905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58caca33-88e9-4a41-9735-56d04f40c4b1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"0d87bcd5f97afbaebd2ffa453fce5850df8531569eef935c9eb2a3032d473213", Pod:"csi-node-driver-mc2dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16ce1578814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.234 [INFO][4905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.234 [INFO][4905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" iface="eth0" netns="" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.234 [INFO][4905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.234 [INFO][4905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.292 [INFO][4911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.292 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.292 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.305 [WARNING][4911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.305 [INFO][4911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" HandleID="k8s-pod-network.c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-csi--node--driver--mc2dz-eth0" Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.307 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:43.311133 env[1343]: 2024-12-13 02:10:43.309 [INFO][4905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5" Dec 13 02:10:43.312729 env[1343]: time="2024-12-13T02:10:43.311158829Z" level=info msg="TearDown network for sandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" successfully" Dec 13 02:10:43.319673 env[1343]: time="2024-12-13T02:10:43.319580041Z" level=info msg="RemovePodSandbox \"c8bc033597f3224167fc41a6b47a07a2d4dad6403c885ced0f96f6adac22d3b5\" returns successfully" Dec 13 02:10:43.320637 env[1343]: time="2024-12-13T02:10:43.320589882Z" level=info msg="StopPodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\"" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.388 [WARNING][4947] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120", Pod:"coredns-76f75df574-7mq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8652c255186", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.388 [INFO][4947] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.388 [INFO][4947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" iface="eth0" netns="" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.388 [INFO][4947] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.388 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.420 [INFO][4957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.420 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.420 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.427 [WARNING][4957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.427 [INFO][4957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.428 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:43.433695 env[1343]: 2024-12-13 02:10:43.430 [INFO][4947] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.433695 env[1343]: time="2024-12-13T02:10:43.432194392Z" level=info msg="TearDown network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" successfully" Dec 13 02:10:43.433695 env[1343]: time="2024-12-13T02:10:43.432240707Z" level=info msg="StopPodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" returns successfully" Dec 13 02:10:43.433695 env[1343]: time="2024-12-13T02:10:43.433539057Z" level=info msg="RemovePodSandbox for \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\"" Dec 13 02:10:43.433695 env[1343]: time="2024-12-13T02:10:43.433612615Z" level=info msg="Forcibly stopping sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\"" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.489 [WARNING][4975] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"755a7ddd-d1f9-477d-b8ad-3e9f709e61fd", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 9, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-6-43f1d61fe5a43f7f03c1.c.flatcar-212911.internal", ContainerID:"887dc0366f68af5157b5e758b8b6b7c943019b2c977f7148df002823a2c6f120", Pod:"coredns-76f75df574-7mq9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8652c255186", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.489 [INFO][4975] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.489 [INFO][4975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" iface="eth0" netns="" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.490 [INFO][4975] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.490 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.521 [INFO][4981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.522 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.522 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.529 [WARNING][4981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.529 [INFO][4981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" HandleID="k8s-pod-network.7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Workload="ci--3510--3--6--43f1d61fe5a43f7f03c1.c.flatcar--212911.internal-k8s-coredns--76f75df574--7mq9l-eth0" Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.531 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:10:43.533757 env[1343]: 2024-12-13 02:10:43.532 [INFO][4975] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789" Dec 13 02:10:43.534652 env[1343]: time="2024-12-13T02:10:43.533780609Z" level=info msg="TearDown network for sandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" successfully" Dec 13 02:10:43.539936 env[1343]: time="2024-12-13T02:10:43.539873753Z" level=info msg="RemovePodSandbox \"7bca3a3f6243c642a90ce41eb21be376aa2a5a18be9eaff96f3097f6726d8789\" returns successfully" Dec 13 02:10:46.509941 systemd[1]: Started sshd@10-10.128.0.48:22-139.178.68.195:33984.service. Dec 13 02:10:46.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.48:22-139.178.68.195:33984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:46.538198 kernel: audit: type=1130 audit(1734055846.509:416): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.48:22-139.178.68.195:33984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:10:46.837449 kernel: audit: type=1101 audit(1734055846.806:417): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.806000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.837763 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 33984 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:10:46.838183 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:46.837000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.853054 systemd[1]: Started session-9.scope. Dec 13 02:10:46.854321 systemd-logind[1329]: New session 9 of user core. Dec 13 02:10:46.867807 kernel: audit: type=1103 audit(1734055846.837:418): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.837000 audit[4987]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe262c87c0 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:46.915327 kernel: audit: type=1006 audit(1734055846.837:419): pid=4987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 02:10:46.915494 kernel: audit: type=1300 audit(1734055846.837:419): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe262c87c0 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:46.915579 kernel: audit: type=1327 audit(1734055846.837:419): proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:46.837000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:46.865000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.925436 kernel: audit: type=1105 audit(1734055846.865:420): pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.957690 kernel: audit: type=1103 audit(1734055846.868:421): pid=4990 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:46.868000 audit[4990]: CRED_ACQ pid=4990 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:47.130661 sshd[4987]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:47.133000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:47.137316 systemd[1]: sshd@10-10.128.0.48:22-139.178.68.195:33984.service: Deactivated successfully. Dec 13 02:10:47.138668 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:10:47.167479 kernel: audit: type=1106 audit(1734055847.133:422): pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:47.174579 systemd[1]: run-containerd-runc-k8s.io-9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc-runc.FTSpWI.mount: Deactivated successfully. Dec 13 02:10:47.179376 systemd-logind[1329]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:10:47.133000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:47.181726 systemd-logind[1329]: Removed session 9. Dec 13 02:10:47.204465 kernel: audit: type=1104 audit(1734055847.133:423): pid=4987 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:47.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.48:22-139.178.68.195:33984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:47.277520 kubelet[2306]: I1213 02:10:47.277453 2306 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mc2dz" podStartSLOduration=37.521247456 podStartE2EDuration="44.277367598s" podCreationTimestamp="2024-12-13 02:10:03 +0000 UTC" firstStartedPulling="2024-12-13 02:10:29.482924312 +0000 UTC m=+47.647218794" lastFinishedPulling="2024-12-13 02:10:36.239044465 +0000 UTC m=+54.403338936" observedRunningTime="2024-12-13 02:10:36.753995309 +0000 UTC m=+54.918289818" watchObservedRunningTime="2024-12-13 02:10:47.277367598 +0000 UTC m=+65.441662088" Dec 13 02:10:51.300259 systemd[1]: run-containerd-runc-k8s.io-9614734e64ea42471fda732a426d00c5aa0e51883e9a1b41b233d257a145bbd7-runc.QBpELo.mount: Deactivated successfully. 
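The kubelet pod_startup_latency_tracker entry just above reports podStartE2EDuration="44.277367598s" and podStartSLOduration=37.521247456 for csi-node-driver-mc2dz. As a quick cross-check of that arithmetic, here is a small standalone Go sketch (illustrative only; the four timestamps are copied from that log entry, the variable names are ours): the E2E figure is observedRunningTime minus podCreationTimestamp, and the SLO figure is the same interval minus the image-pull window between firstStartedPulling and lastFinishedPulling.

```go
// startup_latency.go: redo the duration arithmetic from the kubelet
// pod_startup_latency_tracker line above. The four timestamps are copied
// verbatim from that log entry; the rest is an illustrative sketch.
package main

import (
	"fmt"
	"time"
)

// time.Parse accepts the optional fractional seconds in the input even
// though this layout does not spell them out.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 02:10:03 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2024-12-13 02:10:29.482924312 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2024-12-13 02:10:36.239044465 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2024-12-13 02:10:47.277367598 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)        // podStartE2EDuration: 44.277367598s
	pulling := lastPull.Sub(firstPull) // image pull window: ~6.756s
	slo := e2e - pulling               // podStartSLOduration: ~37.521s

	fmt.Println("e2e:", e2e, "pulling:", pulling, "slo:", slo)
}
```

Subtracting the wall-clock pull timestamps gives 37.521247445s; the logged 37.521247456 matches exactly when the monotonic m=+ readings (54.403338936 - 47.647218794) are used for the pull window instead, which is how the tiny last-digit difference arises.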
Dec 13 02:10:52.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.48:22-139.178.68.195:33994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:52.176774 systemd[1]: Started sshd@11-10.128.0.48:22-139.178.68.195:33994.service. Dec 13 02:10:52.184410 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:10:52.184552 kernel: audit: type=1130 audit(1734055852.175:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.48:22-139.178.68.195:33994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:52.489000 audit[5044]: USER_ACCT pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.493156 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 33994 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:10:52.521417 kernel: audit: type=1101 audit(1734055852.489:426): pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.520000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.522893 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:52.532329 systemd-logind[1329]: New session 10 of user core. Dec 13 02:10:52.534250 systemd[1]: Started session-10.scope. 
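Most of the remaining entries in this part of the log are auditd records (USER_ACCT, CRED_ACQ, USER_START, and so on) whose payload is a flat run of key=value fields, some of them quoted. If you need to pull individual fields out of entries like the USER_ACCT record above, a minimal Go sketch such as the following is enough; the sample line is copied (abridged) from this log, and the regular expression is our own illustration rather than a formal grammar for the audit format.

```go
// audit_fields.go: split one of the audit records above into key=value pairs.
// Quoted values (single or double quotes) are kept as a single field, so the
// whole msg='...' payload comes back as one value.
package main

import (
	"fmt"
	"regexp"
)

var field = regexp.MustCompile(`(\w+)=('[^']*'|"[^"]*"|\S+)`)

func main() {
	// Copied (abridged) from the USER_ACCT record for sshd pid 5044 above.
	line := `audit[5044]: USER_ACCT pid=5044 uid=0 auid=4294967295 ses=4294967295 ` +
		`subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting acct="core" ` +
		`exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'`

	for _, m := range field.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%-6s = %s\n", m[1], m[2])
	}
}
```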
Dec 13 02:10:52.548526 kernel: audit: type=1103 audit(1734055852.520:427): pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.520000 audit[5044]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff95b07230 a2=3 a3=0 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:52.594364 kernel: audit: type=1006 audit(1734055852.520:428): pid=5044 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 02:10:52.595135 kernel: audit: type=1300 audit(1734055852.520:428): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff95b07230 a2=3 a3=0 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:52.595206 kernel: audit: type=1327 audit(1734055852.520:428): proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:52.520000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:52.548000 audit[5044]: USER_START pid=5044 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.636107 kernel: audit: type=1105 audit(1734055852.548:429): pid=5044 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.555000 audit[5055]: CRED_ACQ pid=5055 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.663584 kernel: audit: type=1103 audit(1734055852.555:430): pid=5055 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.818919 sshd[5044]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:52.819000 audit[5044]: USER_END pid=5044 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.854652 kernel: audit: type=1106 audit(1734055852.819:431): pid=5044 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.855493 systemd[1]: 
sshd@11-10.128.0.48:22-139.178.68.195:33994.service: Deactivated successfully. Dec 13 02:10:52.820000 audit[5044]: CRED_DISP pid=5044 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.866011 systemd[1]: Started sshd@12-10.128.0.48:22-139.178.68.195:34004.service. Dec 13 02:10:52.875128 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:10:52.877648 systemd-logind[1329]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:10:52.880569 kernel: audit: type=1104 audit(1734055852.820:432): pid=5044 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:52.879758 systemd-logind[1329]: Removed session 10. Dec 13 02:10:52.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.48:22-139.178.68.195:33994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:52.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.48:22-139.178.68.195:34004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:53.169000 audit[5066]: USER_ACCT pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.172458 sshd[5066]: Accepted publickey for core from 139.178.68.195 port 34004 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:10:53.171000 audit[5066]: CRED_ACQ pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.172000 audit[5066]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcef38e7a0 a2=3 a3=0 items=0 ppid=1 pid=5066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:53.172000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:53.173976 sshd[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:53.181505 systemd-logind[1329]: New session 11 of user core. Dec 13 02:10:53.183659 systemd[1]: Started session-11.scope. 
Dec 13 02:10:53.192000 audit[5066]: USER_START pid=5066 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.195000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.500957 sshd[5066]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:53.502000 audit[5066]: USER_END pid=5066 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.502000 audit[5066]: CRED_DISP pid=5066 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.48:22-139.178.68.195:34004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:53.507795 systemd[1]: sshd@12-10.128.0.48:22-139.178.68.195:34004.service: Deactivated successfully. Dec 13 02:10:53.510258 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:10:53.510828 systemd-logind[1329]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:10:53.512978 systemd-logind[1329]: Removed session 11. Dec 13 02:10:53.544174 systemd[1]: Started sshd@13-10.128.0.48:22-139.178.68.195:34006.service. Dec 13 02:10:53.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.48:22-139.178.68.195:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:10:53.833000 audit[5077]: USER_ACCT pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.835518 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 34006 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:10:53.835000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.836000 audit[5077]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd29c8330 a2=3 a3=0 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:53.836000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:53.838266 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:53.844710 systemd-logind[1329]: New session 12 of user core. Dec 13 02:10:53.846165 systemd[1]: Started session-12.scope. Dec 13 02:10:53.863000 audit[5077]: USER_START pid=5077 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:53.865000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:54.134576 sshd[5077]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:54.134000 audit[5077]: USER_END pid=5077 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:54.135000 audit[5077]: CRED_DISP pid=5077 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:54.138680 systemd[1]: sshd@13-10.128.0.48:22-139.178.68.195:34006.service: Deactivated successfully. Dec 13 02:10:54.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.48:22-139.178.68.195:34006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:54.140496 systemd-logind[1329]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:10:54.140569 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:10:54.143443 systemd-logind[1329]: Removed session 12. 
Dec 13 02:10:59.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.48:22-139.178.68.195:34948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:59.179430 systemd[1]: Started sshd@14-10.128.0.48:22-139.178.68.195:34948.service. Dec 13 02:10:59.185288 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 02:10:59.185653 kernel: audit: type=1130 audit(1734055859.178:452): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.48:22-139.178.68.195:34948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:59.483000 audit[5092]: USER_ACCT pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.489084 sshd[5092]: Accepted publickey for core from 139.178.68.195 port 34948 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:10:59.517544 kernel: audit: type=1101 audit(1734055859.483:453): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.517771 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:59.515000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.534717 systemd[1]: Started session-13.scope. Dec 13 02:10:59.537463 systemd-logind[1329]: New session 13 of user core. 
Dec 13 02:10:59.560239 kernel: audit: type=1103 audit(1734055859.515:454): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.560395 kernel: audit: type=1006 audit(1734055859.515:455): pid=5092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 02:10:59.515000 audit[5092]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc82be6470 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:59.592807 kernel: audit: type=1300 audit(1734055859.515:455): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc82be6470 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:10:59.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:59.602521 kernel: audit: type=1327 audit(1734055859.515:455): proctitle=737368643A20636F7265205B707269765D Dec 13 02:10:59.548000 audit[5092]: USER_START pid=5092 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.561000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.661104 kernel: audit: type=1105 audit(1734055859.548:456): pid=5092 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.661299 kernel: audit: type=1103 audit(1734055859.561:457): pid=5095 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.858520 sshd[5092]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:59.861000 audit[5092]: USER_END pid=5092 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.889686 systemd[1]: sshd@14-10.128.0.48:22-139.178.68.195:34948.service: Deactivated successfully. Dec 13 02:10:59.891032 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:10:59.892111 systemd-logind[1329]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:10:59.895010 systemd-logind[1329]: Removed session 13. 
Dec 13 02:10:59.898499 kernel: audit: type=1106 audit(1734055859.861:458): pid=5092 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.861000 audit[5092]: CRED_DISP pid=5092 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:10:59.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.48:22-139.178.68.195:34948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:10:59.924485 kernel: audit: type=1104 audit(1734055859.861:459): pid=5092 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:01.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.48:22-218.92.0.190:35754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:01.221558 systemd[1]: Started sshd@15-10.128.0.48:22-218.92.0.190:35754.service. Dec 13 02:11:03.760150 update_engine[1332]: I1213 02:11:03.760092 1332 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 02:11:03.760150 update_engine[1332]: I1213 02:11:03.760152 1332 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 02:11:03.761317 update_engine[1332]: I1213 02:11:03.761283 1332 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 02:11:03.762050 update_engine[1332]: I1213 02:11:03.762017 1332 omaha_request_params.cc:62] Current group set to lts Dec 13 02:11:03.762415 update_engine[1332]: I1213 02:11:03.762271 1332 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 02:11:03.762415 update_engine[1332]: I1213 02:11:03.762287 1332 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 02:11:03.762415 update_engine[1332]: I1213 02:11:03.762313 1332 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:11:03.762415 update_engine[1332]: I1213 02:11:03.762359 1332 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 02:11:03.762650 update_engine[1332]: I1213 02:11:03.762476 1332 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 02:11:03.762650 update_engine[1332]: I1213 02:11:03.762486 1332 omaha_request_action.cc:271] Request: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: Dec 13 02:11:03.762650 update_engine[1332]: I1213 02:11:03.762495 1332 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:11:03.767491 locksmithd[1387]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 02:11:03.767963 update_engine[1332]: I1213 02:11:03.767760 1332 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:11:03.768160 update_engine[1332]: I1213 02:11:03.768126 1332 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:11:03.838279 update_engine[1332]: E1213 02:11:03.838019 1332 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:11:03.839543 update_engine[1332]: I1213 02:11:03.839483 1332 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 02:11:04.904841 systemd[1]: Started sshd@16-10.128.0.48:22-139.178.68.195:34960.service. Dec 13 02:11:04.936131 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 13 02:11:04.936262 kernel: audit: type=1130 audit(1734055864.904:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.48:22-139.178.68.195:34960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:04.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.48:22-139.178.68.195:34960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:11:05.197000 audit[5111]: USER_ACCT pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.200180 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 34960 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:05.228452 kernel: audit: type=1101 audit(1734055865.197:463): pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.227000 audit[5111]: CRED_ACQ pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.229956 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:05.238871 systemd[1]: Started session-14.scope. Dec 13 02:11:05.242638 systemd-logind[1329]: New session 14 of user core. Dec 13 02:11:05.255451 kernel: audit: type=1103 audit(1734055865.227:464): pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.228000 audit[5111]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9d14d530 a2=3 a3=0 items=0 ppid=1 pid=5111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:05.272533 kernel: audit: type=1006 audit(1734055865.228:465): pid=5111 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 02:11:05.272610 kernel: audit: type=1300 audit(1734055865.228:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9d14d530 a2=3 a3=0 items=0 ppid=1 pid=5111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:05.228000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:05.300703 kernel: audit: type=1327 audit(1734055865.228:465): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:05.255000 audit[5111]: USER_START pid=5111 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.258000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.366554 kernel: audit: type=1105 audit(1734055865.255:466): pid=5111 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.366739 kernel: audit: type=1103 audit(1734055865.258:467): pid=5114 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.555070 sshd[5111]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:05.590650 kernel: audit: type=1106 audit(1734055865.557:468): pid=5111 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.557000 audit[5111]: USER_END pid=5111 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.561240 systemd-logind[1329]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:11:05.563585 systemd[1]: sshd@16-10.128.0.48:22-139.178.68.195:34960.service: Deactivated successfully. Dec 13 02:11:05.564888 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:11:05.567568 systemd-logind[1329]: Removed session 14. Dec 13 02:11:05.557000 audit[5111]: CRED_DISP pid=5111 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:05.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.48:22-139.178.68.195:34960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:05.616586 kernel: audit: type=1104 audit(1734055865.557:469): pid=5111 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:07.281000 audit[5109]: USER_AUTH pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:11:07.282642 sshd[5109]: Failed password for root from 218.92.0.190 port 35754 ssh2 Dec 13 02:11:07.558000 audit[5109]: USER_AUTH pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:11:07.559220 sshd[5109]: Failed password for root from 218.92.0.190 port 35754 ssh2 Dec 13 02:11:09.969000 audit[5109]: USER_AUTH pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:11:09.970441 sshd[5109]: Failed password for root from 218.92.0.190 port 35754 ssh2 Dec 13 02:11:09.975501 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 02:11:09.975663 kernel: audit: type=1100 audit(1734055869.969:473): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.190 addr=218.92.0.190 terminal=ssh res=failed' Dec 13 02:11:10.264736 sshd[5109]: Received disconnect from 218.92.0.190 port 35754:11: [preauth] Dec 13 02:11:10.264736 sshd[5109]: Disconnected from authenticating user root 218.92.0.190 port 35754 [preauth] Dec 13 02:11:10.266691 systemd[1]: sshd@15-10.128.0.48:22-218.92.0.190:35754.service: Deactivated successfully. Dec 13 02:11:10.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.48:22-218.92.0.190:35754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:10.292492 kernel: audit: type=1131 audit(1734055870.266:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.48:22-218.92.0.190:35754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:10.601485 systemd[1]: Started sshd@17-10.128.0.48:22-139.178.68.195:45474.service. Dec 13 02:11:10.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.48:22-139.178.68.195:45474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:10.627430 kernel: audit: type=1130 audit(1734055870.601:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.48:22-139.178.68.195:45474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:10.898000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:10.899444 sshd[5126]: Accepted publickey for core from 139.178.68.195 port 45474 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:10.929967 kernel: audit: type=1101 audit(1734055870.898:476): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:10.928000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:10.930295 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:10.940259 systemd[1]: Started session-15.scope. Dec 13 02:11:10.941461 systemd-logind[1329]: New session 15 of user core. 
Dec 13 02:11:10.957667 kernel: audit: type=1103 audit(1734055870.928:477): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:10.929000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0df4abe0 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:10.975420 kernel: audit: type=1006 audit(1734055870.929:478): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 02:11:10.975492 kernel: audit: type=1300 audit(1734055870.929:478): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0df4abe0 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:10.929000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:11.003530 kernel: audit: type=1327 audit(1734055870.929:478): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:10.952000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:11.044925 kernel: audit: type=1105 audit(1734055870.952:479): pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:11.045115 kernel: audit: type=1103 audit(1734055870.955:480): pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:10.955000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:11.220771 sshd[5126]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:11.223000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:11.223000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:11.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.48:22-139.178.68.195:45474 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:11.228274 systemd[1]: sshd@17-10.128.0.48:22-139.178.68.195:45474.service: Deactivated successfully. Dec 13 02:11:11.229084 systemd-logind[1329]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:11:11.231014 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:11:11.232405 systemd-logind[1329]: Removed session 15. Dec 13 02:11:13.758476 update_engine[1332]: I1213 02:11:13.758376 1332 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:11:13.759068 update_engine[1332]: I1213 02:11:13.758759 1332 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:11:13.759068 update_engine[1332]: I1213 02:11:13.759042 1332 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:11:13.768247 update_engine[1332]: E1213 02:11:13.768189 1332 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:11:13.768451 update_engine[1332]: I1213 02:11:13.768342 1332 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 02:11:16.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.48:22-139.178.68.195:36560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:16.266298 systemd[1]: Started sshd@18-10.128.0.48:22-139.178.68.195:36560.service. Dec 13 02:11:16.272247 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 02:11:16.272356 kernel: audit: type=1130 audit(1734055876.266:484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.48:22-139.178.68.195:36560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:16.564000 audit[5169]: USER_ACCT pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.567488 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 36560 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:16.595442 kernel: audit: type=1101 audit(1734055876.564:485): pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.596156 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:16.606098 systemd[1]: Started session-16.scope. Dec 13 02:11:16.608155 systemd-logind[1329]: New session 16 of user core. 
Dec 13 02:11:16.594000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.645432 kernel: audit: type=1103 audit(1734055876.594:486): pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.594000 audit[5169]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb32913e0 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:16.705210 kernel: audit: type=1006 audit(1734055876.594:487): pid=5169 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 02:11:16.705458 kernel: audit: type=1300 audit(1734055876.594:487): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb32913e0 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:16.705532 kernel: audit: type=1327 audit(1734055876.594:487): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:16.594000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:16.618000 audit[5169]: USER_START pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.747270 kernel: audit: type=1105 audit(1734055876.618:488): pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.622000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.772417 kernel: audit: type=1103 audit(1734055876.622:489): pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.893356 sshd[5169]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:16.895000 audit[5169]: USER_END pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.899533 systemd-logind[1329]: Session 16 logged out. Waiting for processes to exit. 
Dec 13 02:11:16.902311 systemd[1]: sshd@18-10.128.0.48:22-139.178.68.195:36560.service: Deactivated successfully. Dec 13 02:11:16.903693 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:11:16.907149 systemd-logind[1329]: Removed session 16. Dec 13 02:11:16.929502 kernel: audit: type=1106 audit(1734055876.895:490): pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.929672 kernel: audit: type=1104 audit(1734055876.896:491): pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.896000 audit[5169]: CRED_DISP pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:16.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.48:22-139.178.68.195:36560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:16.958784 systemd[1]: Started sshd@19-10.128.0.48:22-139.178.68.195:36568.service. Dec 13 02:11:16.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.48:22-139.178.68.195:36568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:17.166808 systemd[1]: run-containerd-runc-k8s.io-9a1f513ad2547062f552f99f115ea44e10899cddd274780d466c01cc5f13aecc-runc.F7OR4e.mount: Deactivated successfully. Dec 13 02:11:17.273000 audit[5182]: USER_ACCT pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:17.274764 sshd[5182]: Accepted publickey for core from 139.178.68.195 port 36568 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:17.276000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:17.276000 audit[5182]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff330ec210 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:17.276000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:17.277214 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:17.283992 systemd[1]: Started session-17.scope. Dec 13 02:11:17.286500 systemd-logind[1329]: New session 17 of user core. 
Dec 13 02:11:17.296000 audit[5182]: USER_START pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:17.298000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:17.664165 sshd[5182]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:17.666000 audit[5182]: USER_END pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:17.666000 audit[5182]: CRED_DISP pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:17.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.48:22-139.178.68.195:36568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:17.668764 systemd[1]: sshd@19-10.128.0.48:22-139.178.68.195:36568.service: Deactivated successfully. Dec 13 02:11:17.670153 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:11:17.671799 systemd-logind[1329]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:11:17.673312 systemd-logind[1329]: Removed session 17. Dec 13 02:11:17.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.48:22-139.178.68.195:36574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:17.709433 systemd[1]: Started sshd@20-10.128.0.48:22-139.178.68.195:36574.service. 
Dec 13 02:11:18.003000 audit[5213]: USER_ACCT pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:18.004636 sshd[5213]: Accepted publickey for core from 139.178.68.195 port 36574 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:18.006000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:18.006000 audit[5213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd64950ab0 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:18.006000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:18.007654 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:18.015348 systemd[1]: Started session-18.scope. Dec 13 02:11:18.016580 systemd-logind[1329]: New session 18 of user core. Dec 13 02:11:18.024000 audit[5213]: USER_START pid=5213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:18.027000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:20.765000 audit[5226]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:20.765000 audit[5226]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc6d208980 a2=0 a3=7ffc6d20896c items=0 ppid=2502 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:20.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:20.768000 audit[5226]: NETFILTER_CFG table=nat:118 family=2 entries=22 op=nft_register_rule pid=5226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:20.768000 audit[5226]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc6d208980 a2=0 a3=0 items=0 ppid=2502 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:20.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:20.792000 audit[5228]: NETFILTER_CFG table=filter:119 family=2 entries=32 op=nft_register_rule pid=5228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 
13 02:11:20.792000 audit[5228]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffedd35c970 a2=0 a3=7ffedd35c95c items=0 ppid=2502 pid=5228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:20.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:20.809732 sshd[5213]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:20.810000 audit[5213]: USER_END pid=5213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:20.810000 audit[5213]: CRED_DISP pid=5213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:20.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.48:22-139.178.68.195:36574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:20.815000 audit[5228]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=5228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:20.815000 audit[5228]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffedd35c970 a2=0 a3=0 items=0 ppid=2502 pid=5228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:20.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:20.815541 systemd-logind[1329]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:11:20.816837 systemd[1]: sshd@20-10.128.0.48:22-139.178.68.195:36574.service: Deactivated successfully. Dec 13 02:11:20.818025 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:11:20.820366 systemd-logind[1329]: Removed session 18. Dec 13 02:11:20.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.48:22-139.178.68.195:36578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:20.852872 systemd[1]: Started sshd@21-10.128.0.48:22-139.178.68.195:36578.service. 
Dec 13 02:11:21.157000 audit[5231]: USER_ACCT pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.158884 sshd[5231]: Accepted publickey for core from 139.178.68.195 port 36578 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:21.159000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.159000 audit[5231]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbdf51fb0 a2=3 a3=0 items=0 ppid=1 pid=5231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:21.159000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:21.161870 sshd[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:21.174174 systemd[1]: Started session-19.scope. Dec 13 02:11:21.174778 systemd-logind[1329]: New session 19 of user core. Dec 13 02:11:21.188000 audit[5231]: USER_START pid=5231 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.191000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.848071 sshd[5231]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:21.859832 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 02:11:21.859997 kernel: audit: type=1106 audit(1734055881.848:521): pid=5231 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.848000 audit[5231]: USER_END pid=5231 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.853150 systemd[1]: sshd@21-10.128.0.48:22-139.178.68.195:36578.service: Deactivated successfully. Dec 13 02:11:21.854785 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:11:21.861501 systemd-logind[1329]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:11:21.863462 systemd-logind[1329]: Removed session 19. 
Dec 13 02:11:21.890258 kernel: audit: type=1104 audit(1734055881.848:522): pid=5231 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.848000 audit[5231]: CRED_DISP pid=5231 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:21.902272 systemd[1]: Started sshd@22-10.128.0.48:22-139.178.68.195:36584.service. Dec 13 02:11:21.919705 kernel: audit: type=1131 audit(1734055881.852:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.48:22-139.178.68.195:36578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:21.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.48:22-139.178.68.195:36578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:21.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.48:22-139.178.68.195:36584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:21.964475 kernel: audit: type=1130 audit(1734055881.901:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.48:22-139.178.68.195:36584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:22.213000 audit[5242]: USER_ACCT pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.215730 sshd[5242]: Accepted publickey for core from 139.178.68.195 port 36584 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:22.223905 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:22.244586 kernel: audit: type=1101 audit(1734055882.213:525): pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.276442 kernel: audit: type=1103 audit(1734055882.219:526): pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.219000 audit[5242]: CRED_ACQ pid=5242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.255819 systemd[1]: Started session-20.scope. Dec 13 02:11:22.257504 systemd-logind[1329]: New session 20 of user core. 
Dec 13 02:11:22.296425 kernel: audit: type=1006 audit(1734055882.219:527): pid=5242 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 02:11:22.219000 audit[5242]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8f5529b0 a2=3 a3=0 items=0 ppid=1 pid=5242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:22.219000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:22.334331 kernel: audit: type=1300 audit(1734055882.219:527): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8f5529b0 a2=3 a3=0 items=0 ppid=1 pid=5242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:22.334539 kernel: audit: type=1327 audit(1734055882.219:527): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:22.267000 audit[5242]: USER_START pid=5242 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.367425 kernel: audit: type=1105 audit(1734055882.267:528): pid=5242 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.270000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.590217 sshd[5242]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:22.593000 audit[5242]: USER_END pid=5242 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.593000 audit[5242]: CRED_DISP pid=5242 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:22.597837 systemd-logind[1329]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:11:22.598103 systemd[1]: sshd@22-10.128.0.48:22-139.178.68.195:36584.service: Deactivated successfully. Dec 13 02:11:22.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.48:22-139.178.68.195:36584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:22.601325 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:11:22.602169 systemd-logind[1329]: Removed session 20. 
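In the audit(...) identifiers above, the value before the colon is the event time as epoch seconds with millisecond precision, and the number after it is the record serial. A small Python sketch (not from the log) converting one of the identifiers above to wall-clock time; the result lines up with the surrounding journal timestamps:

from datetime import datetime, timezone

def audit_event_time(event_id: str) -> datetime:
    """Convert an audit event id like '1734055882.219:527' to a UTC datetime."""
    seconds, _serial = event_id.split(":")
    return datetime.fromtimestamp(float(seconds), tz=timezone.utc)

print(audit_event_time("1734055882.219:527"))
# -> 2024-12-13 02:11:22.219000+00:00, matching the Dec 13 02:11:22 entries above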
Dec 13 02:11:23.757554 update_engine[1332]: I1213 02:11:23.757498 1332 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:11:23.758130 update_engine[1332]: I1213 02:11:23.757851 1332 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:11:23.758130 update_engine[1332]: I1213 02:11:23.758077 1332 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:11:23.770071 update_engine[1332]: E1213 02:11:23.770022 1332 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:11:23.770267 update_engine[1332]: I1213 02:11:23.770187 1332 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 02:11:27.209000 audit[5257]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:27.216368 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 02:11:27.216573 kernel: audit: type=1325 audit(1734055887.209:533): table=filter:121 family=2 entries=20 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:27.209000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe9fafede0 a2=0 a3=7ffe9fafedcc items=0 ppid=2502 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:27.270244 kernel: audit: type=1300 audit(1734055887.209:533): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe9fafede0 a2=0 a3=7ffe9fafedcc items=0 ppid=2502 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:27.270473 kernel: audit: type=1327 audit(1734055887.209:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:27.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:27.290000 audit[5257]: NETFILTER_CFG table=nat:122 family=2 entries=106 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:27.311425 kernel: audit: type=1325 audit(1734055887.290:534): table=nat:122 family=2 entries=106 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:11:27.290000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe9fafede0 a2=0 a3=7ffe9fafedcc items=0 ppid=2502 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:27.348489 kernel: audit: type=1300 audit(1734055887.290:534): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe9fafede0 a2=0 a3=7ffe9fafedcc items=0 ppid=2502 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:27.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:27.373414 kernel: 
audit: type=1327 audit(1734055887.290:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:11:27.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.48:22-139.178.68.195:60862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:27.635600 systemd[1]: Started sshd@23-10.128.0.48:22-139.178.68.195:60862.service. Dec 13 02:11:27.661429 kernel: audit: type=1130 audit(1734055887.634:535): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.48:22-139.178.68.195:60862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:27.935000 audit[5259]: USER_ACCT pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:27.938561 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 60862 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:27.967495 kernel: audit: type=1101 audit(1734055887.935:536): pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:27.968808 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:27.966000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:27.984073 systemd[1]: Started session-21.scope. Dec 13 02:11:27.985369 systemd-logind[1329]: New session 21 of user core. 
Dec 13 02:11:27.996804 kernel: audit: type=1103 audit(1734055887.966:537): pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:27.966000 audit[5259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc91d4030 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:27.966000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:27.996000 audit[5259]: USER_START pid=5259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:28.004000 audit[5262]: CRED_ACQ pid=5262 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:28.027443 kernel: audit: type=1006 audit(1734055887.966:538): pid=5259 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 02:11:28.285025 sshd[5259]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:28.286000 audit[5259]: USER_END pid=5259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:28.286000 audit[5259]: CRED_DISP pid=5259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:28.290699 systemd[1]: sshd@23-10.128.0.48:22-139.178.68.195:60862.service: Deactivated successfully. Dec 13 02:11:28.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.48:22-139.178.68.195:60862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:28.292378 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:11:28.292446 systemd-logind[1329]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:11:28.295344 systemd-logind[1329]: Removed session 21. Dec 13 02:11:33.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.48:22-139.178.68.195:60868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:33.330074 systemd[1]: Started sshd@24-10.128.0.48:22-139.178.68.195:60868.service. Dec 13 02:11:33.351417 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 02:11:33.351557 kernel: audit: type=1130 audit(1734055893.329:544): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.48:22-139.178.68.195:60868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:11:33.627000 audit[5274]: USER_ACCT pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.631080 sshd[5274]: Accepted publickey for core from 139.178.68.195 port 60868 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:33.657000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.659879 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:33.670293 systemd[1]: Started session-22.scope. Dec 13 02:11:33.672693 systemd-logind[1329]: New session 22 of user core. Dec 13 02:11:33.684520 kernel: audit: type=1101 audit(1734055893.627:545): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.684670 kernel: audit: type=1103 audit(1734055893.657:546): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.691984 kernel: audit: type=1006 audit(1734055893.657:547): pid=5274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 02:11:33.657000 audit[5274]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5f256670 a2=3 a3=0 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:33.701667 kernel: audit: type=1300 audit(1734055893.657:547): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5f256670 a2=3 a3=0 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:33.657000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:33.738855 kernel: audit: type=1327 audit(1734055893.657:547): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:33.739005 kernel: audit: type=1105 audit(1734055893.680:548): pid=5274 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.680000 audit[5274]: USER_START pid=5274 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.756436 update_engine[1332]: I1213 02:11:33.755755 1332 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:11:33.756436 update_engine[1332]: I1213 02:11:33.756098 1332 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:11:33.756436 update_engine[1332]: I1213 02:11:33.756363 1332 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:11:33.771455 kernel: audit: type=1103 audit(1734055893.688:549): pid=5277 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.688000 audit[5277]: CRED_ACQ pid=5277 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.837761 update_engine[1332]: E1213 02:11:33.836585 1332 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.836735 1332 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.836747 1332 omaha_request_action.cc:621] Omaha request response: Dec 13 02:11:33.837761 update_engine[1332]: E1213 02:11:33.836895 1332 omaha_request_action.cc:640] Omaha request network transfer failed. Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.836920 1332 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.836964 1332 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.836971 1332 update_attempter.cc:306] Processing Done. Dec 13 02:11:33.837761 update_engine[1332]: E1213 02:11:33.836992 1332 update_attempter.cc:619] Update failed. Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.837000 1332 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.837018 1332 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.837026 1332 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.837126 1332 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.837153 1332 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 02:11:33.837761 update_engine[1332]: I1213 02:11:33.837162 1332 omaha_request_action.cc:271] Request: Dec 13 02:11:33.837761 update_engine[1332]: Dec 13 02:11:33.837761 update_engine[1332]: Dec 13 02:11:33.837761 update_engine[1332]: Dec 13 02:11:33.838831 update_engine[1332]: Dec 13 02:11:33.838831 update_engine[1332]: Dec 13 02:11:33.838831 update_engine[1332]: Dec 13 02:11:33.838831 update_engine[1332]: I1213 02:11:33.837170 1332 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:11:33.838831 update_engine[1332]: I1213 02:11:33.837502 1332 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:11:33.838831 update_engine[1332]: I1213 02:11:33.837713 1332 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:11:33.841330 locksmithd[1387]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 02:11:33.849115 update_engine[1332]: E1213 02:11:33.848784 1332 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848922 1332 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848935 1332 omaha_request_action.cc:621] Omaha request response: Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848945 1332 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848951 1332 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848958 1332 update_attempter.cc:306] Processing Done. Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848966 1332 update_attempter.cc:310] Error event sent. Dec 13 02:11:33.849115 update_engine[1332]: I1213 02:11:33.848979 1332 update_check_scheduler.cc:74] Next update check in 40m39s Dec 13 02:11:33.850228 locksmithd[1387]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 02:11:33.970650 sshd[5274]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:33.971000 audit[5274]: USER_END pid=5274 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:33.974000 audit[5274]: CRED_DISP pid=5274 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:34.008265 systemd[1]: sshd@24-10.128.0.48:22-139.178.68.195:60868.service: Deactivated successfully. Dec 13 02:11:34.010025 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:11:34.019933 systemd-logind[1329]: Session 22 logged out. Waiting for processes to exit. 
Dec 13 02:11:34.022004 systemd-logind[1329]: Removed session 22. Dec 13 02:11:34.030064 kernel: audit: type=1106 audit(1734055893.971:550): pid=5274 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:34.030177 kernel: audit: type=1104 audit(1734055893.974:551): pid=5274 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:34.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.48:22-139.178.68.195:60868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:39.016241 systemd[1]: Started sshd@25-10.128.0.48:22-139.178.68.195:36714.service. Dec 13 02:11:39.033474 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:11:39.033596 kernel: audit: type=1130 audit(1734055899.015:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.48:22-139.178.68.195:36714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:39.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.48:22-139.178.68.195:36714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:39.312000 audit[5287]: USER_ACCT pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.314453 sshd[5287]: Accepted publickey for core from 139.178.68.195 port 36714 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:39.347728 kernel: audit: type=1101 audit(1734055899.312:554): pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.348935 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:39.346000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.364727 systemd[1]: Started session-23.scope. Dec 13 02:11:39.366694 systemd-logind[1329]: New session 23 of user core. 
Dec 13 02:11:39.375409 kernel: audit: type=1103 audit(1734055899.346:555): pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.400429 kernel: audit: type=1006 audit(1734055899.346:556): pid=5287 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 02:11:39.400615 kernel: audit: type=1300 audit(1734055899.346:556): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe095766f0 a2=3 a3=0 items=0 ppid=1 pid=5287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:39.346000 audit[5287]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe095766f0 a2=3 a3=0 items=0 ppid=1 pid=5287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:39.346000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:39.427863 kernel: audit: type=1327 audit(1734055899.346:556): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:39.375000 audit[5287]: USER_START pid=5287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.469239 kernel: audit: type=1105 audit(1734055899.375:557): pid=5287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.469520 kernel: audit: type=1103 audit(1734055899.378:558): pid=5290 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.378000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.666068 systemd[1]: Started sshd@26-10.128.0.48:22-194.169.175.37:33726.service. Dec 13 02:11:39.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.48:22-194.169.175.37:33726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:39.692918 kernel: audit: type=1130 audit(1734055899.666:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.48:22-194.169.175.37:33726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:11:39.692366 sshd[5287]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:39.692000 audit[5287]: USER_END pid=5287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.697233 systemd[1]: sshd@25-10.128.0.48:22-139.178.68.195:36714.service: Deactivated successfully. Dec 13 02:11:39.698834 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:11:39.726123 systemd-logind[1329]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:11:39.728903 kernel: audit: type=1106 audit(1734055899.692:560): pid=5287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.730079 systemd-logind[1329]: Removed session 23. Dec 13 02:11:39.693000 audit[5287]: CRED_DISP pid=5287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:39.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.48:22-139.178.68.195:36714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:40.640442 sshd[5298]: Invalid user 1234 from 194.169.175.37 port 33726 Dec 13 02:11:40.789000 audit[5298]: USER_AUTH pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="1234" exe="/usr/sbin/sshd" hostname=194.169.175.37 addr=194.169.175.37 terminal=ssh res=failed' Dec 13 02:11:40.791395 sshd[5298]: Failed password for invalid user 1234 from 194.169.175.37 port 33726 ssh2 Dec 13 02:11:40.926725 sshd[5298]: Connection closed by invalid user 1234 194.169.175.37 port 33726 [preauth] Dec 13 02:11:40.928555 systemd[1]: sshd@26-10.128.0.48:22-194.169.175.37:33726.service: Deactivated successfully. Dec 13 02:11:40.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.48:22-194.169.175.37:33726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:43.265828 systemd[1]: run-containerd-runc-k8s.io-9614734e64ea42471fda732a426d00c5aa0e51883e9a1b41b233d257a145bbd7-runc.1xd4DL.mount: Deactivated successfully. Dec 13 02:11:44.726344 systemd[1]: Started sshd@27-10.128.0.48:22-139.178.68.195:36716.service. Dec 13 02:11:44.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.128.0.48:22-139.178.68.195:36716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:11:44.732584 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 02:11:44.732668 kernel: audit: type=1130 audit(1734055904.726:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.128.0.48:22-139.178.68.195:36716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 02:11:45.024000 audit[5327]: USER_ACCT pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.026179 sshd[5327]: Accepted publickey for core from 139.178.68.195 port 36716 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:11:45.054000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.056055 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:11:45.070321 systemd[1]: Started session-24.scope. Dec 13 02:11:45.072700 systemd-logind[1329]: New session 24 of user core. Dec 13 02:11:45.081800 kernel: audit: type=1101 audit(1734055905.024:566): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.081928 kernel: audit: type=1103 audit(1734055905.054:567): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.055000 audit[5327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff137d1910 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:45.105477 kernel: audit: type=1006 audit(1734055905.055:568): pid=5327 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 02:11:45.105568 kernel: audit: type=1300 audit(1734055905.055:568): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff137d1910 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:11:45.055000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:45.133579 kernel: audit: type=1327 audit(1734055905.055:568): proctitle=737368643A20636F7265205B707269765D Dec 13 02:11:45.082000 audit[5327]: USER_START pid=5327 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.175356 kernel: audit: type=1105 audit(1734055905.082:569): pid=5327 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.087000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.200832 kernel: audit: type=1103 audit(1734055905.087:570): pid=5330 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.347281 sshd[5327]: pam_unix(sshd:session): session closed for user core Dec 13 02:11:45.349000 audit[5327]: USER_END pid=5327 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.353578 systemd-logind[1329]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:11:45.356427 systemd[1]: sshd@27-10.128.0.48:22-139.178.68.195:36716.service: Deactivated successfully. Dec 13 02:11:45.357784 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:11:45.360446 systemd-logind[1329]: Removed session 24. Dec 13 02:11:45.350000 audit[5327]: CRED_DISP pid=5327 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.383425 kernel: audit: type=1106 audit(1734055905.349:571): pid=5327 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.383496 kernel: audit: type=1104 audit(1734055905.350:572): pid=5327 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 13 02:11:45.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.128.0.48:22-139.178.68.195:36716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
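The sshd@26 connection above records a password probe for the invalid user "1234" from 194.169.175.37. A rough Python sketch (an illustration, not part of the log) for tallying such probes per source address from a journal dump like this one:

import re
from collections import Counter

# Matches lines such as:
#   sshd[5298]: Invalid user 1234 from 194.169.175.37 port 33726
INVALID_USER = re.compile(r"sshd\[\d+\]: Invalid user (\S+) from (\S+) port \d+")

def count_invalid_user_probes(log_text: str) -> Counter:
    """Count sshd 'Invalid user' probes keyed by source IP address."""
    return Counter(m.group(2) for m in INVALID_USER.finditer(log_text))

# For the text above this yields Counter({'194.169.175.37': 1}).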